A Comparison of UAV and Satellite Multispectral Imagery in Monitoring Onion Crop: An Application in the 'Cipolla Rossa di Tropea' (Italy)

Precision agriculture (PA) is a management strategy that analyzes the spatial and temporal variability of agricultural fields using information and communication technologies with the aim of optimizing profitability, sustainability, and the protection of agro-ecological services. In the context of PA, this research evaluated the reliability of multispectral (MS) imagery collected at different spatial resolutions by an unmanned aerial vehicle (UAV) and by the PlanetScope and Sentinel-2 satellite platforms in monitoring onion crops over three different dates. The soil adjusted vegetation index (SAVI) was used for monitoring the vigor of the study field. Next, the vigor maps from the two satellite platforms were compared with those derived from the UAV by statistical analysis in order to evaluate the contribution made by each platform to monitoring onion crops. In addition, the two coverage classes of the field, bare soil and onions, were spatially identified using geographic object-based image analysis (GEOBIA), and their spectral contribution was analyzed by comparing the SAVI calculated considering only crop pixels (i.e., SAVI onions) and that calculated considering only bare soil pixels (i.e., SAVI soil) with the SAVI from the three platforms. The results showed that satellite imagery, coherent and correlated with UAV images, can be useful to assess the general conditions of the field, while the UAV makes it possible to discriminate localized, circumscribed areas that the lower resolution of the satellites misses, where conditions of inhomogeneity in the field are determined by abiotic or biotic stresses.



UAV Images
The UAV surveys were carried out between the middle (November) and the end (January) of the cultivation cycle. In this time frame, the onion crop was in principal stage 4 (development of harvestable vegetative plant parts) (Figure 2) of the BBCH (Biologische Bundesanstalt, Bundessortenamt and Chemical Industry) extended scale [43]. Surveys were carried out at 50 m of flight height using a very light fixed-wing UAV, the Parrot Disco-Pro AG, made of foam and weighing only 780 g (Figure 3). The UAV was equipped with a Parrot Sequoia MS camera, a light-weight camera employed in several research works related to PA, on herbaceous crops, monitoring wheat [44], maize and poppy crops [45], and phenotyping soybean [46], and on tree crops, proving useful in identifying citrus trees [24,47], mapping irrigation inhomogeneities in an olive grove [48], and mapping vigor in vineyards [49]. The Parrot Sequoia has four different channels, each with 1.2 Mp of resolution: Green (530-570 nm), Red (640-680 nm), Red Edge (730-740 nm), and NIR (770-810 nm). Furthermore, it was also equipped with an RGB composite sensor, an external irradiance sensor with a global navigation satellite system (GNSS), and inertial measurement unit (IMU) modules. The irradiance sensor, which measures the sky down-welling irradiance, was placed on top of the UAV and continuously captured the sky irradiance in the same spectral bands as the MS camera [50,51]. The IMU allowed capturing the sensor angle, sun angle, location, and irradiance for every image taken during the flight. These data were mainly used for image calibration. The UAV flights were carried out three times: the first on 23 November 2018, the second on 19 December 2018, and the last on 18 January 2019. The first two dates, as shown in Figure 2, concerned the phase of the crop cycle of full development of harvestable vegetative plant parts. In this phase, crucial operations were carried out, such as a large part of the fertilization and phytosanitary treatments. On the last monitoring date, in January, onions were close to harvesting.
The procedure performed for field surveys was similar to the one shown in Messina et al. [52,53]. In the field, 9 ground control points (GCPs) were placed, whose positions were geo-referenced using a Leica RTK GNSS with a planimetric accuracy of 0.03 m. In particular, GCPs were made using 50 cm × 50 cm white polypropylene panels, covering two quadrants with black cardboard to locate the point. MS imagery was calibrated using a panel with known reflectance, the Parrot Sequoia calibration target (Figure 3). In particular, photos of the target were taken before and after the flight, and the raw sensor data were transformed into percentage reflectance in combination with the data provided by the solar radiation sensor. All consecutive images were processed via aerial image triangulation with the geo-tagged flight log and the geographic tags through the software Pix4Dmapper (Pix4D S.A., Switzerland). Following the recommended Sequoia image correction procedure, corrections were applied to the raw data, generating four single reflectance-calibrated GeoTIFFs.
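As a rough sketch of how such a panel-based reflectance conversion works (the actual correction is handled internally by Pix4D together with the Sequoia irradiance data; the function name and the numeric values below are illustrative assumptions, not the Pix4D internals), a one-point empirical-line correction can be written as:

```python
import numpy as np

def empirical_line_calibration(raw_band, panel_raw_mean, panel_reflectance):
    """One-point empirical-line correction: scale raw digital numbers (DN)
    so that the calibration panel's mean raw value maps to its known
    reflectance. Illustrative sketch, not the actual Pix4D procedure."""
    gain = panel_reflectance / panel_raw_mean
    return np.asarray(raw_band, dtype=float) * gain

# Hypothetical example: a panel of 50% known reflectance imaged at a mean
# raw value of 2000 DN; a 1000-DN pixel then maps to 25% reflectance.
calibrated = empirical_line_calibration([1000.0, 2000.0], 2000.0, 0.5)
```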

Satellite Images
Sentinel-2 is managed through the Copernicus Programme of the European Union (EU) and the European Space Agency (ESA). The first satellite was launched in 2015 [54]. Sentinel-2 consists of two twin satellites, Sentinel-2A and Sentinel-2B, operating on a single orbital plane but phased at 180°, giving this bi-satellite system a temporal resolution of 5 days [55,56].
The Sentinel-2 data consists of 13 bands in the visible, NIR, and short-wavelength infrared (SWIR) spectral range [55,57] with a spatial resolution of 10 m, 20 m, or 60 m depending on the band. Sentinel-2 images used in this work were the four spectral bands at 10 m spatial resolution in Blue, Green, Red, and NIR spectra, shown in Table 1. Data were downloaded from the Copernicus Open Access Hub [58] in level 2A, which provides the bottom of atmosphere (BOA) reflectance images ortho-rectified in UTM/WGS84 projection.
PlanetScope's imagery was acquired from the PlanetScope archive [59], which manages the largest satellite constellation consisting of more than 150 satellites orbiting the Earth. PlanetScope satellites follow two different orbital configurations [60]. Some satellites are in International Space Station Orbit (ISS), and they are at a 52° inclination at about 420 km of altitude. Other satellites are in the Sun Synchronous Orbit (SSO) with an altitude of 475 km (at 98° inclination) and have equatorial crossing between 9:30 and 11:30.
These sensors, 3U CubeSats (Table 1), also called "Doves", have small dimensions (10 cm × 10 cm × 30 cm, with a weight of 4 kg) and provide daily sun-synchronous coverage of the whole Earth landmass [60].
Dove satellites' CCD array sensor (6600 × 4400 pixels) allows capturing images using three bands (RGB) or four (RGB plus NIR) [61]. PlanetScope imagery has a scene footprint of about 24.4 km × 8.1 km and a ground sample distance of 3.7 m.
The PlanetScope imagery used is the Ortho Scene product (Level 3B), i.e., imagery processed to remove distortions caused by terrain. Imagery is radiometrically, sensor, and geometrically corrected [59]. Furthermore, imagery is atmospherically corrected using the 6S radiative transfer model with ancillary data from the moderate resolution imaging spectroradiometer (MODIS) [62].
Table 1. Characteristics of the multispectral camera and of the satellites whose images were used in this research.



Comparison of Vegetation Indices (VIs) from the Three Platforms
In view of comparing satellite data with UAV data, three satellite images from each satellite platform were collected. Images were selected among those available without cloud cover on the days closest to those of the UAV surveys, as follows: (1) 29 November 2018, 19 December 2018, and 15 January 2019 from Sentinel-2 and (2) 23 November 2018, 19 December 2018, and 19 January 2019 from PlanetScope (Figure 2). The soil adjusted vegetation index (SAVI) was chosen to analyze the vegetative vigor of the onion cultivation. SAVI was developed by Huete [63] to minimize the effects of soil background on the vegetation signal by inserting into the normalized difference vegetation index (NDVI) formula a constant soil adjustment factor L [64], according to the following formula:

SAVI = (1 + L) × (ρ_NIR − ρ_Red) / (ρ_NIR + ρ_Red + L)

where L is the constant soil adjustment factor, which can assume values between 0 and 1 depending on the level of vegetation cover, and ρ is the reflectance at the given wavelength.
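In practice, the index is computed per pixel from the Red and NIR reflectance bands; a minimal sketch with NumPy (the reflectance values in the example are illustrative):

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index (Huete, 1988):
    SAVI = (1 + L) * (NIR - Red) / (NIR + Red + L),
    with L the soil adjustment factor in [0, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Illustrative reflectance values: a vegetated pixel (strong NIR/Red
# contrast) scores well above a bare-soil pixel (weak contrast).
veg = savi(0.45, 0.08)
soil = savi(0.25, 0.20)
```

Applied band-wise to a whole orthomosaic, the same function yields the SAVI raster at the sensor's native resolution.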

The SAVI, belonging to the family of soil-corrected vegetation indices [65], further reduces the background reflectance contribution, facilitating the identification of plants and their discrimination from the soil.
In this case study, the monitored species, onion, has thin and small leaves, especially in the early and middle stages of growth (mid-August-October, Figure 2), and when monitoring its growth, it is difficult to effectively separate the plants from the background in the images [16]. SAVI was calculated at the native MS band resolution of each sensor (5 cm for UAV, 3 m for PlanetScope, and 10 m for Sentinel-2). The various SAVI maps were used to describe and assess the variability within the onion field, as also shown in Khaliq et al. 2019 [66].
Descriptive statistics and histograms, calculated with R software, were used for a preliminary comparison of image data with native image resolutions. The degree of correlation between pairs of SAVI maps was then investigated using Pearson's correlation coefficients. Initially, three comparisons were made taking into account the three dates investigated: a correlation between UAV (images resampled at 3 m resolution) and PlanetScope, a correlation between PlanetScope (images resampled at 10 m resolution) and Sentinel-2, and, finally, a correlation between UAV (images resampled at 10 m resolution) and Sentinel-2.
Then, to investigate the relationships between crop and soil cover, further correlation analyses were performed between the UAV SAVIs, including only onions and only soil, and the SAVIs from the two satellite platforms.
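The resample-then-correlate step can be sketched as follows, using simple block averaging as a stand-in for the GIS resampling actually performed (the synthetic arrays and the aggregation factor are illustrative, not the study's data):

```python
import numpy as np

def block_mean(arr, factor):
    """Aggregate a 2-D raster by averaging non-overlapping factor x factor
    blocks -- a simple stand-in for resampling a fine SAVI map to a coarser
    (satellite-like) grid."""
    h, w = arr.shape
    h2, w2 = h // factor * factor, w // factor * factor
    blocks = arr[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Synthetic "UAV" SAVI: a vigor gradient across the field plus
# fine-scale plant/soil variability
uav_savi = np.linspace(0.1, 0.6, 600)[:, None] + rng.normal(0.0, 0.05, (600, 600))
uav_coarse = block_mean(uav_savi, 60)                    # resampled to a 10x10 grid
sat_savi = uav_coarse + rng.normal(0.0, 0.02, (10, 10))  # synthetic "satellite" SAVI

# Pearson's r between the co-registered coarse maps
r = np.corrcoef(uav_coarse.ravel(), sat_savi.ravel())[0, 1]
```

With real data, the resampled UAV raster and the satellite raster must share the same grid and projection before the pixel-wise correlation is meaningful.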

Image Segmentation and Classification
A GEOBIA process was developed to explore the potentiality of UAV images in discriminating soil coverage types and in producing other UAV SAVI maps for the subsequent comparison. Considering the type of crop and the structure of the field, which imply the presence of portions of soil, clearly visible from above, both among the plants and in the paths used for the passage of agricultural vehicles, classification was performed to separate crop and soil. Firstly, to extract the onion crop, the GEOBIA image classification procedure was performed. The classification was developed considering only the spectral response of the vegetation in the different bands. The first step performed in the GEOBIA procedure was the segmentation of the image. It is a fundamental prerequisite for classification/feature extraction [67] and foresees the segmentation of the image into separate, non-overlapping regions [68], then extracted in the form of vectorial objects.
Segmentation, which consists of dividing objects into smaller ones and creating new ones, altering the morphology of the previously existing ones, takes place according to precise rules. The segmentation algorithm used is the multiresolution algorithm [69].
This algorithm operates by identifying single objects, having a size of a pixel, and merging them with the nearby objects following a criterion of relative homogeneity while minimizing the average heterogeneity [70]. The homogeneity criterion is linked to the combination of the spectral and shape properties of the original image's objects and of those of "new" objects obtained by the merging process. Homogeneity criteria are regulated by two parameters: shape and compactness.
The setting of the shape parameter concerns the importance/weight given by the segmentation process to the shape of objects with respect to color. The shape parameter can assume a value between 0 and 0.9. Color homogeneity derives from the standard deviation of the spectral colors, while shape homogeneity results from the deviation from a compact (or smooth) shape. Color and shape are linked, and the value or weight given by the user to the shape parameter determines different results of the segmentation.
In particular, the higher the chosen value (between 0 and 0.9), the higher the influence of shape, with respect to color, on the segmentation, and vice versa [71]. Compactness is the second adjustable parameter and determines the importance of shape with respect to smoothness; it results from the product of width and length calculated on the number of pixels [72]. The third parameter is scale. The scale parameter determines the final size and dimension of the objects resulting from segmentation [67,73].
Attributing higher values or smaller values of scale parameter generates larger and smaller objects, respectively. Since the size of the objects depends on this parameter, it indirectly defines the maximum allowed heterogeneity for the obtained image objects. In addition, different weights can be attributed to the several input data (i.e., band layers). To perform the segmentation, the following parameters were chosen: 0.1 for shape, 0.5 for compactness, 0.3 for scale parameter, and weight 1 assigned to layers that correspond to the bands provided by Parrot Sequoia: Green, Red, Red Edge, and NIR.
Before choosing these parameters, some trial-and-error tests were performed, attributing different values to the segmentation parameters until the segmentation considered best (based on visual interpretation) was obtained. It was essential, in this case, to obtain segments that would allow the single plants to be distinguished.
After completing the segmentation phase, the onion crops were classified based only on a SAVI threshold value of ≥0.25. The value was chosen as a result of some trial-and-error tests and judged better based on visual interpretation of its ability to detect plants, following the methodology used in Modica et al. (2020) [74].
The data obtained concerning the vegetation coverage of the field was used to create a mask (and a second for the soil) to be applied to the map. The masks obtained by exporting a vector file containing only the class "onions" were applied to the UAV images at their native resolution with the aim to obtain only parts of orthomosaics concerning the onion crop.
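A minimal sketch of this threshold-based crop/soil separation on a raster SAVI map (the toy array is illustrative; in the study the threshold was applied to GEOBIA segments rather than directly to raw pixels):

```python
import numpy as np

SAVI_THRESHOLD = 0.25  # value chosen by trial and error in the study

def split_crop_soil(savi_map, threshold=SAVI_THRESHOLD):
    """Label pixels at or above the SAVI threshold as onions and the rest as
    bare soil, returning the binary mask plus crop-only and soil-only SAVI
    maps (NaN outside each class)."""
    onion_mask = savi_map >= threshold
    savi_onions = np.where(onion_mask, savi_map, np.nan)
    savi_soil = np.where(~onion_mask, savi_map, np.nan)
    return onion_mask, savi_onions, savi_soil

# Toy 2x2 SAVI map: two vegetated pixels, two bare-soil pixels
savi_map = np.array([[0.10, 0.30],
                     [0.28, 0.05]])
onion_mask, savi_onions, savi_soil = split_crop_soil(savi_map)
```

Exporting `onion_mask` (and its complement) as vector layers corresponds to the two masks applied to the UAV orthomosaics.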
In order to evaluate the presence of pure and mixed pixels of the vegetation class in Sentinel-2 and PlanetScope images, a spatial analysis procedure was developed in eCognition. First of all, shapefiles containing vector grids matching the pixel size of the Sentinel-2 and PlanetScope images were prepared on eCognition using a chessboard segmentation.
These grids were then superimposed on the classified UAV images at an upper level of the hierarchy. Several levels of segmentation constitute a hierarchical structure in the GEOBIA paradigm [75], and in our case, the super-objects (the grids) belonged to the upper level and included the vegetation class present in the lower level as sub-objects. Following this procedure, the area percentage occupied by the class "onion" within each pixel at Sentinel-2 and PlanetScope resolutions was calculated.
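The per-pixel purity computation amounts to overlaying the coarse grid on the fine binary classification and averaging the crop mask within each cell; a sketch under that reading (the grid size and toy mask are illustrative, and the eCognition chessboard overlay is emulated with array reshaping):

```python
import numpy as np

def onion_percentage(onion_mask, factor):
    """Percentage of fine-resolution 'onion' pixels falling inside each
    coarse (satellite-pixel-sized) grid cell -- mimicking the chessboard
    super-object overlay used to gauge pixel purity."""
    h, w = onion_mask.shape
    h2, w2 = h // factor * factor, w // factor * factor
    cells = onion_mask[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return cells.mean(axis=(1, 3)) * 100.0

# Toy classification: two fully vegetated columns (e.g., crop rows) in an
# 8x8 mask, overlaid by a 2x2 grid of coarse cells
mask = np.zeros((8, 8), dtype=bool)
mask[:, :2] = True
purity = onion_percentage(mask, 4)
```

Cells at 100% or 0% correspond to pure vegetation or pure soil pixels; intermediate values flag the mixed pixels that dominate at satellite resolution.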

Results and Discussion
SAVI calculated at the native MS band's resolution of each sensor is shown in Figure 4. Considering the imagery of November, UAV SAVI's value ranged from 0 to 0.4, PlanetScope SAVI's value ranged from 0.15 to 0.5, while Sentinel-2 SAVI's value ranged from 0.15 to 0.8. In December, UAV SAVI's value ranged from 0 to 0.7, PlanetScope SAVI's value ranged from 0.3 to 0.9, and Sentinel-2 SAVI's value ranged between 0.15 and 1.
Lastly, in January, as far as the UAV was concerned, the range was similar to that of the previous month, while SAVI values ranged between 0.15 and 0.8 and from 0.3 to 1 in PlanetScope and Sentinel-2, respectively. The histograms reported in Figure 5 show the frequency distribution of SAVI values as a percentage of the total values, using the native resolution of each imagery. The histograms showed interesting differences between the three platforms used and also between the datasets of the same platform. In general, histograms showed a reduced range of values in UAV images when compared to the broader ranges of PlanetScope and, especially, Sentinel-2.
This was highlighted by similarities in the localization of some areas of greater or lesser vigor in the field. It was evident when imagining dividing the image into two parts: in the upper part, there were areas of lower vigor, while in the lower part, there were areas of the field with high vigor. Therefore, the satellites showed that they were capable of assessing the general conditions of the field. However, it is essential to remember that the heterogeneity of the analyzed surfaces in terms of land cover (rows, inter-rows, and paths) and the spatial resolution of Sentinel-2 imagery imply that a single pixel is made up for the most part of rows, inter-rows, and paths used for the passage of agricultural machines [76].
The UAV SAVI average had values of 0.11, 0.14, and 0.19 in November, December, and January, respectively (Table 2). As for PlanetScope images, the mean SAVI value was 0.27, 0.53, and 0.48 in November, December, and January, respectively. In Sentinel-2 images, the mean SAVI value was 0.36 in November, 0.42 in December, and 0.59 in January. SAVI varied among the platforms, its value increasing from the imagery with the highest resolution (UAV) to that with the lowest (Sentinel-2). However, even though the satellite and UAV maps had different index ranges, it was possible to see some similarities in the distribution of vigor in the onion field. The SAVI values of UAV and PlanetScope showed a high correlation, with values between 0.82 and 0.86 (Figure 6). Similar correlations resulted from the comparisons with Sentinel-2 imagery.
Evaluating the spectral variability of the three platforms, taking into account the coefficient of variation (CV), there was a clear difference between the CV of UAV images and that of the satellites, as shown in Table 2. Considering the UAV imagery, CV had a value of 62% in November, 70% in December, and 55% in January, while the CV in satellite imagery ranged between 24% and 35% in the three months/datasets. In general, there was an increase in the CV from low-resolution (satellite) to high-resolution (UAV) imagery. However, the increase in CV was not accompanied by a greater range of SAVI values in UAV images compared to those of the satellites. This was also confirmed by higher standard deviation values in satellite imagery than in that of the UAV.
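The mechanism behind this difference, i.e., fine pixels resolving the plant/soil alternation while coarse pixels average it away, can be illustrated with the coefficient of variation (the synthetic series below is illustrative, not the study's data):

```python
import numpy as np

def coeff_variation(values):
    """Coefficient of variation of a SAVI map, in percent (std / mean * 100),
    used here to compare the variability captured at each resolution."""
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean() * 100.0

# Fine resolution resolves alternating soil (low SAVI) and plant (high SAVI)
# pixels; averaging neighbourhoods narrows the distribution and lowers the CV.
fine = np.tile([0.05, 0.45], 500)            # alternating soil/plant pixels
coarse = fine.reshape(-1, 10).mean(axis=1)   # 10-pixel averages
cv_fine, cv_coarse = coeff_variation(fine), coeff_variation(coarse)
```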
The onion crop surveyed is highly heterogeneous, characterized by the alternation of plants (higher values) with inter-rows and the bare soil of the background (lower values). The 5-cm very high resolution of UAV images detected the oscillation of these values, allowing a distinction between plants and soil. On the other hand, the discontinuity between plants and bare soil was not detected at the lower resolution of the satellites, which averages plant and bare-soil reflectance values, therefore resulting in a narrower distribution.
Regarding the degree of correlation between pairs of SAVI maps based on Pearson's correlation coefficients, observing the coherence between SAVI maps of the UAV (resampled at 3 m) and the PlanetScope satellite platform (Figure 6), high correlations emerged in the three months, with r values of 0.84 in November, 0.86 in December, and 0.82 in January. A similar correlation was found when comparing Sentinel-2 and UAV in November (0.81). Higher values were found in December and January, 0.9 and 0.88, respectively.
By comparing PlanetScope images with those of Sentinel-2, the highest correlation values were observed in each month, compared to the previous correlations, with 0.84, 0.94, and 0.92 in November, December, and January, respectively. Indeed, even when comparing the maps visually at their respective resolutions (Figure 4), this was highlighted by similarities in the localization of some areas of greater or lesser vigor in the field.
The results obtained from the correlations of the UAV images, resampled first at 3 m and then at 10 m, seemed to indicate a certain coherence between the information provided by the three platforms.
With the aim of obtaining a more comprehensive picture of the analyzed crop, the SAVI index was also calculated by classifying the onion crop and the soil separately, using the UAV imagery as reference.
Therefore, we obtained, on the one hand, crop pixels (i.e., SAVI onions) and, on the other hand, bare soil pixels observed in the inter-rows and the paths (i.e., SAVI soil). This allowed taking into account the presence of mixed spectral pixels, which depends on the spatial resolution [77] and is more evident in PlanetScope and Sentinel-2 images, considering the size of their pixels compared to the object of study.
Then, further correlation analyses were performed to analyze the ability of the platforms to provide information on crop and soil. The correlation between SAVI onions and PlanetScope (Figure 7) yielded values of 0.61 in November, 0.84 in December, and 0.70 in January. The analysis of the correlation between SAVI onions and Sentinel-2 (Figure 7) showed values of 0.63, 0.83, and 0.77 in November, December, and January, respectively. The lower correlation values found in November with both satellites could be explained by a lower crop coverage compared to the soil, unlike December, when coverage increased.
The correlation between SAVI soil and PlanetScope (Figure 8) yielded values of 0.56 in November, 0.24 in December, and 0.28 in January. The analysis of the correlation between SAVI soil and Sentinel-2 (Figure 8) showed similar values: 0.55, 0.31, and 0.25 in November, December, and January, respectively. The correlation values in December and January were quite similar, while the highest value was found in November. This probably confirms the point made above, considering that in November bare soil was prevalent within the scene compared to the crop. The results confirmed what was shown in Khaliq et al. (2019) [66]. In particular, satellite imagery showed some limitations, providing only indirectly reliable information on crop status where the crop radiometric signal can be altered by the presence of other sources, in this case the soil, which in November was predominant. In the following months, the lower correlation values were due to a smaller presence of bare soil, compared to parts of the field completely or sporadically covered by the crop.
The influence exerted by the different types of coverage on the pixel signal relates to spectral mixing. This is a problem that concerns the lower resolution images, i.e., those of PlanetScope and Sentinel-2. Using the onion class mask extracted from the UAV images, the percentage of area occupied by vegetation (onion) within the PlanetScope and Sentinel-2 pixels was calculated and is shown in Figure 9. Pixels in orange contained a percentage of pixel area occupied by vegetation between 0 and 10% and could be assimilated to pure pixels of bare soil. On the other hand, pixels in dark green, with a percentage of pixel area occupied by vegetation between 90 and 100%, could be assimilated to pure pixels of vegetation. The remaining pixels, colored with different shades of green, were mixed pixels.
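The per-pixel cover percentages behind Figure 9 can be sketched as a block aggregation of the high-resolution onion mask. A minimal version (the function is our own; the 0-10% and 90-100% thresholds follow the text, while the exact GIS workflow used in the study may differ):

```python
import numpy as np

def vegetation_fraction(onion_mask, block):
    """Percentage of each coarse pixel's area covered by vegetation.
    onion_mask: high-resolution binary array (1 = onion, 0 = bare soil);
    block: fine pixels per coarse-pixel side, e.g. 3 m / 0.05 m = 60
    for PlanetScope pixels over the 5-cm UAV mask."""
    h, w = onion_mask.shape
    m = onion_mask[: h - h % block, : w - w % block].astype(float)
    H, W = m.shape
    frac = m.reshape(H // block, block, W // block, block).mean(axis=(1, 3))
    return frac * 100.0

def pixel_class(pct):
    """Classify a coarse pixel as in Figure 9."""
    if pct <= 10:
        return "pure bare soil"   # orange
    if pct >= 90:
        return "pure vegetation"  # dark green
    return "mixed"                # shades of green
```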
A preponderant presence of orange pixels, and therefore bare soil, was easily visible in the PlanetScope and Sentinel-2 maps of November.
On the other hand, pure pixels of vegetation were mostly present in the maps of the following months, as a natural consequence of the cultivation cycle. During these months, when the crop was growing regularly and its capacity to cover the underlying soil improved, there were many pure vegetation pixels. This happened especially in PlanetScope images, whose pixels each cover an area of 9 m²: the smaller the pixel size, the less likely a pixel is to contain more than one coverage type. Fewer pure pixels were present in Sentinel-2 images, whose pixels have a size of 10 m × 10 m.
A correlation analysis between SAVI values and the percentage of area covered by onion crop within PlanetScope's and Sentinel-2's pixels was performed in order to further investigate the presence of mixed pixels (Figure 10). Regarding PlanetScope images, the highest value was that of November, 0.86, probably explained by a predominant presence of pure bare soil pixels. In the following months, the value obtained was 0.70; fewer bare soil pixels were present, but the pure vegetation pixels increased. The trend was similar for Sentinel-2 images, but the values were lower: 0.74 in November, 0.60 in December, and 0.59 in January. In these images, the problem of mixed pixels was more pronounced.
Finally, we produced the SAVI maps using the images surveyed in the three months considered and for all three platforms (Figure 11). With this aim, the UAV and PlanetScope maps were resampled at Sentinel-2's 10 m geometrical resolution.
Looking at the maps, the main effect of resampling the UAV images was evident: the impossibility of distinguishing the details that allow discriminating among the crop, the soil, and the inter-rows. Resampling the UAV images to a coarser spatial resolution, resulting in fewer pixels, had as its main visible consequence the loss of information related to the different SAVI values of rows, inter-rows, and paths. Indeed, upscaling the spatial resolution erases the details of the original data [78]. Increasing pixel size decreases the spatial variability of a vegetation index, as shown in Tarnavsky et al. (2008) [79]. Besides, radiometric resolution influenced the VIs' dynamic range. Indeed, observing SAVI in Figure 11, differences between platforms in the range of index values appeared immediately evident. What was evident at first glance was the difference between the SAVI values of the UAV images and those of the two satellites, as a result of lower spectral variability in the UAV images.
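The loss of row/inter-row detail under upscaling can be illustrated with a simple block-mean resampling (a stand-in for the actual resampling method; the SAVI pattern below is synthetic):

```python
import numpy as np

def block_mean(raster, block):
    """Aggregate a fine raster to a coarser one by averaging block x block
    windows (pixels beyond an exact multiple of `block` are cropped)."""
    h, w = raster.shape
    r = raster[: h - h % block, : w - w % block]
    H, W = r.shape
    return r.reshape(H // block, block, W // block, block).mean(axis=(1, 3))

# Synthetic SAVI pattern: alternating crop rows (0.6) and inter-rows (0.1)
fine = np.tile([0.1, 0.6], (400, 200))   # 400 x 400 pixels on a 5-cm grid
coarse = block_mean(fine, 200)           # 5 cm -> 10 m (factor 200)
print(fine.std(), coarse.std())          # row/inter-row variability collapses
```

After aggregation every coarse pixel holds the same row/soil average, so the spatial variability of the index vanishes, which is the effect described in [78,79].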
As far as the UAV images were concerned, the lowest values were close to 0, while the highest values were 0.3 in November and December and 0.4 in January. Looking at the PlanetScope images instead, the highest values reached by the SAVI were 0.5, 0.9, and 0.8 in November, December, and January, respectively. In the Sentinel-2 images, the minimum values were higher than those of the other platforms, between 0.2 (in November) and 0.3 (in January).
The maximum values reached by the SAVI were 0.7, 0.9, and 1 in November, December, and January, respectively. One important aspect must be stressed: the trend of the SAVI, proceeding from November, two months after transplantation, until January, a period close to the onion harvest, was the same regardless of the platform used. The same trend was confirmed by comparing the SAVI bands highlighted in the spectral signatures derived from pure onion pixels in the three periods (see Figure A1 in the Appendix). The SAVI increased progressively from November to December and January. This aspect was less apparent in the resampled UAV images due to the loss of information. However, the SAVI calculated on the lower resolution satellite images had higher values, as shown by the dominant green color of the corresponding vigor maps. While it is not clear how differences in spatial resolution affect VI values under field conditions, some studies have demonstrated this effect by comparing different satellites [80-85], showing that VI values are higher in coarser spatial resolution images. Several factors can be responsible for inter-sensor VI variations [85]. Firstly, the calibration procedure may cause inter-sensor SAVI variations. Calibration provides precision and correctness to the data derived from a sensor so that all the datasets obtained from the same sensor can be compared. The algorithms used for calibration, including those for radiometric correction, differ from one sensor to another. These uncertainties remain when VIs produced by different sensors are compared [86,87]. However, as far as calibration is concerned, Sentinel-2's images are better than PlanetScope's [88]. Obviously, the technological differences between the sensors used by UAVs and those on board the satellites cannot be ignored.
Other variations can be due to the lack of bandwidth correspondence, as shown in Gallo and Daughtry (1987) [89] and Teillet et al. (1997) [90]. Besides, the differences in the spatial and radiometric resolution of the various sensors must also be taken into account [85]. Therefore, since several factors are responsible for inter-sensor differences in vegetation index values, these differences are not necessarily attributable to a single factor; it is more prudent to consider the cumulative effects of all factors on VIs [85].
In addition to the maps with 10 m resolution, another map was produced, including the SAVI computed only on the onion crop, in order to evaluate the contribution made by the UAV images. As shown, the UAV images proved useful for separating vegetation and soil, given the obvious limitations (related to the size of individual plants) imposed by the spatial resolution of the satellites. The maps were produced using the masks generated in eCognition, already shown in Figure 12.
In particular, the masks obtained by exporting a vector file containing only the class "onions" were applied to the UAV images at their native resolution, with the aim of obtaining only the parts of the orthomosaics concerning the onion crop. As a result, the parts of the scene occupied by soil were excluded from the onion map.
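On the raster side, applying the exported class mask amounts to setting non-onion pixels to NoData. A minimal sketch (we assume the vector mask has already been rasterized into a binary array aligned with the SAVI map, which the study did within its GIS chain):

```python
import numpy as np

def onion_only_savi(savi_map, onion_mask):
    """Keep SAVI values only where the GEOBIA 'onions' class is present;
    bare soil and paths become NoData (NaN), as in the onion-only maps."""
    return np.where(onion_mask.astype(bool), savi_map, np.nan)
```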
Observing the SAVI maps (Figure 13) applied to the UAV images, considering only the part of the imagery occupied by the onion crop, the index for the three months surveyed was between 0 and 0.9.
The values for November were the lowest. This could be because the crop was still in the early stages of the cultivation cycle. Besides, during the segmentation phase, the software was not always able to correctly separate the vegetation from the background, so it is conceivable that the lower values in the map could be traced back to the underlying terrain. In the map of December, the values were higher than in the previous month. In the portion of the field where transplanting took place in mid-September, the index values were lower, between 0.15 and 0.45. There were also evident areas where the vegetation struggled to grow, as shown in [52]. In the portion of the field at an advanced stage of cultivation, the values were higher, between 0.45 and 0.9. The contrast of colors between the two areas of the field with different transplanting times was evident. The January map showed an increase in SAVI values, with the crop at a near-harvest stage.
The vigor maps produced were useful to investigate areas of the field where the crop was struggling to grow, providing the farmer with potentially helpful information. Indeed, as shown in Figure 13, we could clearly distinguish areas (white, in the absence of data) where the crop seemed not to be present. In November, this area was larger than in the following months. Since in November the crop was at the initial stages of growth, many plants, probably still too small, were not identified by the classification software; given the structure of the epigeal part of the onion, especially in this phase, a resolution equal to or finer than 4 cm would have been required. In fact, in December, many of the voids present in November were filled. However, some areas of the field where the crop remained absent were circled in black. These voids persisted in January and were probably indicative of a problem of stunted growth with several possible causes, attributable to the action of abiotic agents, such as water stagnation or nutrient deficiency, or perhaps to the presence of a disease. In essence, beyond the values assumed by the index, the UAV proved useful and was able to identify individual plants. Where this had not happened, we found voids, which could be explained either by the complete absence of plants or by plants that were poorly developed or had difficulty growing. So, this vigor map indirectly provides an indication to the farmer. This information could be used, for example, for localized fertilization or for the grubbing up of diseased plants.
When comparing the vigor maps of the different platforms, the satellites, considering the altitude at which they are located, provide images characterized by a coarser resolution but applicable for monitoring large areas and still able to recognize variations in vegetation growth and crop health status [91]. In addition, as shown in this study, they are often characterized by a higher spectral variability and greater temporal and spatial reliability in the range of values assumed by the index, also taking into account a consolidated calibration method (in the case of Sentinel-2). Furthermore, the SWIR band with which Sentinel-2 is equipped allows the calculation of other indices useful for monitoring, such as the normalized difference moisture index (NDMI) [92].
On the other hand, low-altitude RS using UAVs is confirmed to be a useful tool in PA. In PA, considering agricultural monitoring, repeatable and timely information on within-field variability has a specific utility [3,93], as it allows production efficiency to be optimized through sustainable and spatially explicit management practices [94,95]. In the present case study, some details of Figure 13 were made evident only by the vigor map derived from the UAV images. This is evident given the inability of satellites to discriminate specific details of the field, such as inter-row paths, as also shown in [49,66,96], which implies that the value of a VI within a pixel is necessarily derived from the average of the crop and inter-row information. It is also necessary to take into account the limits, due to their spatial resolution, which prevent satellites from highlighting problems located in areas smaller than the minimum area they can identify.

Conclusions
This article dealt with a comparison between images of onion crops derived from three different platforms: a UAV and two satellites, one a free medium-resolution platform and the other a low-cost, high-resolution platform. The comparison was mainly based on the analysis of spatial resolution differences and the effects they may have on data quality in a PA context. For this reason, vigor maps were generated using the SAVI index and resampled at the coarsest resolution, that of the Sentinel-2 satellite.
Regarding the comparison between UAVs and satellites, the introduction of relatively new platforms, including nano-satellites, equipped with sensors that provide high or ultra-high resolution images (less than 3 m and 1 m, respectively) makes satellites increasingly competitive with UAVs in PA applications. The feature that makes UAVs unique is that they can mount several types of sensors simultaneously [13]. Otherwise, considering all the platforms available, there is probably not yet one that can provide high spectral, spatial, and temporal resolution images at the same time [97]. Actually, simultaneous requirements for ultra-high spatial resolution images (<10 m) with almost daily temporal resolution can only be met by targeted acquisition via commercial programmable multi-sensor systems, such as WorldView, with an MS resolution of about 1 m. At the same time, Sentinel-2 is currently the finest resolution open-source MS imaging mission. In the case study, taking into account the characteristics of the crop, a resolution of less than or equal to one meter was preferable for more accurate data collection.
This article confirmed the results of other studies that have highlighted the role of high-resolution satellites in crop monitoring on a large scale. On the other hand, some limitations and uncertainties emerged in this case study, where there is a need to discriminate localized conditions of inhomogeneity in the field determined by abiotic or biotic stresses. This can be important in order to plan remedial interventions, such as the localized application of pesticides, herbicides, and fertilizers. In this case, the images provided by the UAV made the difference, proving useful in guiding localized agronomic operations, such as fertilization and phytosanitary treatments. In the present case study, the monitoring regarded a crop, onion, characterized mainly in the early stages of the crop cycle by small plant size and a non-homogeneous soil cover capacity. It must be specified that better results in monitoring with UAVs could be obtained with images of higher resolution than the one used, therefore below 4 cm. As for a more accurate comparison of the quality of the data provided on vegetation index values, it would be interesting to make a further comparison, in the same context, including higher-priced cameras. Considering the overall results of the comparison carried out, it emerges that the contribution made by each platform must be regarded as complementary to that of the others and not sufficient by itself for accurate monitoring of the crop under study, in light of the limitations shown by each platform.
In contexts similar to the one presented, the frequent use of a UAV for weekly monitoring could be impractical and expensive if carried out over several fields, perhaps not too large and spaced out from each other. In these cases, it would be easier to use satellite images to check the general conditions of the field, interspersed with more detailed UAV images at critical moments in the crop cycle. The advisable solution is therefore not the use and preference of one platform over another; rather, a combination of different platforms, taking into account the level of information quality that each one can give, is desirable when the proper technical knowledge is available. In order to overcome the limitations of all the platforms described above, it would be desirable to combine UAV images (preferably with a resolution finer than 4 cm) with high-resolution satellite images to improve the overall quality of the final products. To further explore the comparison between the different platforms, it would also be interesting to test the proposed approach on other crops.