Review

Quantitative Remote Sensing at Ultra-High Resolution with UAV Spectroscopy: A Review of Sensor Technology, Measurement Procedures, and Data Correction Workflows

1 Crop Science Group, Institute of Agricultural Sciences, ETH Zurich, 8092 Zurich, Switzerland
2 Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, National Land Survey of Finland, Geodeetinrinne 2, 02431 Masala, Finland
3 Discipline of Geography and Spatial Sciences, School of Technology, Environments and Design, College of Sciences and Engineering, University of Tasmania, Private Bag 76, Hobart 7005, Australia
4 European Commission (EC), Joint Research Centre (JRC), Directorate D—Sustainable Resources, Via E. Fermi 2749—TP 261, 26a/043, I-21027 Ispra, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(7), 1091; https://doi.org/10.3390/rs10071091
Submission received: 25 May 2018 / Revised: 18 June 2018 / Accepted: 30 June 2018 / Published: 9 July 2018
(This article belongs to the Special Issue Recent Progress and Developments in Imaging Spectroscopy)

Abstract:
In the last 10 years, developments in robotics, computer vision, and sensor technology have provided new spectral remote sensing tools to capture unprecedented ultra-high spatial and high spectral resolution data with unmanned aerial vehicles (UAVs). This development has led to a revolution in geospatial data collection in which remotely sensed data are no longer collected and delivered only by a few specialist data providers, but a whole, diverse community is potentially able to gather geospatial data that fit their needs. However, the diversification of sensing systems and user applications challenges the common application of good practice procedures that ensure the quality of the data. This challenge can only be met by establishing and communicating common procedures that have demonstrated success in scientific experiments and operational demonstrations. In this review, we evaluate the state-of-the-art methods in UAV spectral remote sensing and discuss sensor technology, measurement procedures, geometric processing, and radiometric calibration based on the literature and more than a decade of experimentation. We follow the ‘journey’ of the reflected energy from the particle in the environment to its representation as a pixel in a 2D or 2.5D map, or 3D spectral point cloud. Additionally, we reflect on the current revolution in remote sensing and identify trends, potential opportunities, and limitations.

1. Introduction

Over the past decade, the number of applications of unmanned aerial vehicles (UAVs, also referred to as drones, unmanned aerial/aircraft systems (UAS), or remotely piloted aircraft systems (RPAS)) has exploded. Already in 2008, unmanned robots were envisioned to bring about a new era in agriculture [1]. Recent studies have shown that UAV remote sensing techniques are revolutionizing forest studies [2], spatial ecology [3], ecohydrology [4,5] and other environmental monitoring applications [6]. The main driver for this revolution is the fast pace of technological advances and the miniaturization of sensors, airframes, and software [7]. A wide range of UAV platforms and sensors have been developed in the last decade. They have given individual scientists, small teams, and the commercial sector the opportunity to repeatedly obtain low-cost imagery at ultra-high spatial resolutions (1 cm to 1 m), tailored to specific areas, products, and delivery times [3,8,9,10,11]. Moreover, computing power and easy-to-use consumer-grade software packages, which include modern computer vision and photogrammetry algorithms such as structure from motion (SfM) [12], are becoming cheaper and available to many users.
Before the era of UAVs, the majority of spectral datasets were produced by external data suppliers (companies or institutions) in a standardized way using a few types of sensors on-board satellites and manned aircraft. Today, research teams own or even build their own sensing systems and process their data themselves without the need for external data suppliers. Technology is developing rapidly and offering new types of sensors. This diversification makes data quality assurance considerations even more critical—in particular for quantitative and spectral remote sensing approaches, given the complexity of the geometric and radiometric corrections required for accurate spectroscopy-focused environmental remote sensing.
Spectral remote sensing gathers information by measuring the radiance emitted (e.g., in the case of chlorophyll fluorescence), reflected, and transmitted from particles, objects, or surfaces. However, this information is influenced by environmental conditions (mainly the illumination conditions) and modified by the sensor, measurement protocol, and the data-processing procedure. Thus, it is critical to understand the full sensing process, since undesired effects during data acquisition and processing may have a significant impact on the confidence of decisions made using the data [13]. Moreover, it is also a prerequisite to later use pixels to understand the biological processes of the Earth system (c.f. [14]).
Recently, several papers have reviewed the literature for UAV technology and its application in Earth observation [7,8,9,10,15,16,17]. With the issues potentially arising from an increasing diversity of small spectral sensors for UAV remote sensing, there is also a growing need to spread knowledge on sensor technology, data acquisition, protocols, and data processing. Thus, the objective of this review is to describe and discuss recent spectral UAV sensing technology, its integration on UAV platforms, and geometric and radiometric data-processing procedures for spectral data captured by UAV sensing systems based on the literature, but also on more than a decade of our own experiences. Our aim is to follow the signal through the sensing process (Figure 1) and discuss important steps to acquire reliable data with UAV spectral sensing. Additionally, we reflect on the current revolution in remote sensing to identify trends and potentials.
This review is structured as follows. Different technical solutions for spectral UAV sensor technology are described in Section 2. The geometric and radiometric processing steps are elaborated in Section 3 and Section 4, respectively. In Section 5, we build a more complete picture of significant development steps and present the recommended best practices. The conclusions in Section 6 complete this review.

2. Spectral UAV Sensors

Spectral sensing can be performed using different approaches. Since in most countries UAVs can be flown without special permission only below a take-off weight of 30 kg, this review focuses on sensors that can be carried by such UAVs. Generally, spectral sensors capture information in spectrally and radiometrically characterized bands. Commonly, they are distinguished by the arrangement and/or the number of bands [18,19]. Furthermore, sensors can be classified on the basis of the method by which they achieve spatial discrimination and the method by which they achieve spectral discrimination [20]. In the following subsections, we introduce the types of sensors that are available for UAVs, together with some example applications. Table 1 at the end of Section 2 summarizes the different sensor types.

2.1. Point Spectrometers

Point spectrometers (referred to as spectroradiometers if they are spectrally and radiometrically calibrated) capture individual spectral signatures of objects. The spectrometer’s field of view (FOV) and the distance to the object define the footprint of the measurement.
Already in 2008, point spectrometers were mounted on flying platforms (e.g., [21]). Recently, spectrometers have been miniaturized such that their size converged with the payload capacity of UAVs. In 2014, Burkart et al. [22] developed an ultra-lightweight UAV spectrometer system based on the compact Ocean Optics STS [23] for field spectroscopy. It recorded spectral information in the wavelength range of 338 nm to 824 nm, with a full width at half maximum (FWHM) of 3 nm and a FOV of 12°, and later it was used as a flying goniometer to measure the bidirectional reflectance distribution function of vegetation [24]. Garzonio et al. [25] presented a system to measure sun-induced fluorescence in the oxygen A absorption band with the high-spectral-resolution spectrometer Ocean Optics USB4000 with an FWHM of 1.5 nm and a FOV of 6°. Furthermore, UAV point spectrometers have been used to investigate the impact of environmental variables on water reflectance [26] and for intercomparison with data from a moderate resolution imaging spectroradiometer (MODIS) in Greenland [27]. In addition, UAV point measurements have been fused with multispectral 2D imaging sensors [28] and applied from fixed-wing UAVs [29], e.g., to monitor phytoplankton blooms [30]. Additionally, first attempts have been made to build low-cost whiskbroom systems for UAVs, which are able to quickly scan points on the surface [31].
The benefits of point spectrometers are their high spectral resolution, high dynamic range, high signal-to-noise ratio (SNR), and low weight. The USB4000 weighs approximately 190 g, and the STS spectrometer only weighs about 60 g. In combination with a microcontroller, the total ready-to-fly payload with the STS is only 216 g [22], allowing their installation on very small UAVs. However, since their data contains no spatial reference, auxiliary data is required for georeferencing; this makes additional devices necessary, increasing the overall weight and cost of the system (c.f. Section 3.1). In addition, the data within the FOV of the spectrometer cannot be spatially resolved; i.e., each spectral measurement includes spectral information from objects within the FOV of the sensor.

2.2. Pushbroom Spectrometers

The pushbroom sensor records a line of spectral information at each exposure. By repeating the recording of individual lines during flight, a continuous spectral image over the object is obtained. Each pixel represents the spectral signature of the objects within its instantaneous FOV (IFOV) [32]. The pushbroom design has been the ‘standard’ design for large airborne imaging spectrometers for many years; however, miniaturization for small UAVs has only recently been achieved.
Zarco-Tejada et al. [33] demonstrated the use of narrow-band indices acquired from a UAV platform for water stress detection in an orchard using a Headwall micro-Hyperspec VNIR pushbroom scanner [34] with an FWHM of 3.2 nm or 6.4 nm, depending on the slit used. In addition, pushbroom scanners have been used to detect plant diseases [35], estimate gross primary production (GPP) by means of physiological vegetation indices [36], and retrieve chlorophyll fluorescence by means of the Fraunhofer line depth method with three narrow spectral bands around the O2-A absorption feature at 760 nm [35,36]. Pushbroom sensors have also been flown together with light detection and ranging (LiDAR) systems to fuse spectral and 3D data [37]. Most of these studies were carried out with fixed-wing UAVs at flying altitudes of 330–575 m above ground level (AGL), resulting in a ground sampling distance (GSD) of 0.3–0.4 m. Pushbroom systems have also been mounted on multi-rotor UAVs, flying lower and slower, resulting in ultra-high ground sampling distances (<10 cm pixel size). Lucieer et al. [38] and Malenovský et al. [39] achieved resolutions of up to 4 cm to map the health of Antarctic moss beds.
State-of-the-art pushbroom sensors for UAVs weigh between 0.5 kg and 4 kg (typically ~1 kg) including the lens. However, due to the large amount of data captured by such a sensor, a mini-computer with considerable storage capacity has to be flown together with the sensor, which increases the payload (Section 3.2). Suomalainen et al. [40] built a hyperspectral mapping system based on off-the-shelf components. Their total system included a pushbroom spectrometer, global navigation satellite system (GNSS) receiver and inertial measurement unit (IMU), a red–green–blue (RGB) camera, and a Raspberry Pi to control the system and act as a data sink. The total weight of the system was 2 kg [40]. Recently, companies that formerly focused on full-size aircraft systems have also attempted to miniaturize their sensors for UAVs and provide self-contained systems including the sensor, GNSS/IMU, and control and storage devices (e.g., [41]).
The benefits of pushbroom devices include their high spatial and spectral resolution, and the ability to provide spatially resolved images. Compared to the point spectrometers, they are heavier and require more powerful on-board computers. Similar to point spectrometers, pushbroom systems need additional equipment on-board the UAV to enable accurate georeferencing of the data (c.f. Section 3.2).

2.3. Spectral 2D Imagers

2D spectral imagers record spectral data in two spatial dimensions within every exposure. This has opened up new ways of imaging spectroscopy [42], since computer vision algorithms can be used to compose a scene from individual images, and spectral and 3D information can be retrieved from the same data and combined into (hyper)spectral digital surface models [43,44]. Since [43] first attempted to categorize 2D imagers (then commonly referred to as image-frame cameras or central perspective images [32]), new technologies have appeared. Today, 2D imagers exist that record the spectral bands sequentially or all at once within a snapshot. In addition, multi-camera systems record spectral bands synchronously with several cameras. In the following sections, these different technologies are reviewed.

2.3.1. Multi-Camera 2D Imagers

A multi-camera 2D imager uses several integrated cameras to record a multispectral or hyperspectral image. Often, this is done by placing filters with a specific wavelength configuration in front of the detectors. The first popular camera of this type for UAV applications was the Tetracam MCA, which had four or six cameras. Examples of applications developed with this first bulky model (2.7 kg), carried on-board a helicopter UAV, include water stress detection and precision agriculture studies [45,46,47]. Further miniaturization of the MCA model into the mini-MCA camera enabled its use from lightweight platforms for vegetation detection in herbaceous crops [48] and weed mapping [49]. However, due to its technical configuration, calibration and post-processing of the data were complex [50]. Additionally, the camera had a rolling shutter, in which not all parts of the image are recorded at the same time. For moving scenes (e.g., due to the movement of the sensor), this results in “rolling-shutter” effects that distort the images. Thus, rolling shutter cameras are not suited for taking images during UAV movement. Newer Tetracam models, such as the Macaw, now use a global shutter.
Recently, similar but more compact systems have appeared on the market. Among them are the MicaSense Parrot Sequoia and RedEdge(-m) [51,52] with four and five spectral bands (blue, green, red, red edge, near-infrared), and the MAIA camera [53], with nine bands captured by separate imaging sensors that operate simultaneously. Such cameras were used to assess forest health [54], leaf area index in mangrove forests [55] and grapevine disease infestation [56]. In addition, self-built multi-camera spectral 2D systems have been used to identify water stress [57] as well as crop biomass and nitrogen content [58].

2.3.2. Sequential 2D Imagers

Sequential band systems record bands or sets of bands sequentially in time, with a time lag between two consecutive spectral bands. These systems have often been called image-frame sensors [43,44,59,60]. An example of such a system is the Rikola hyperspectral imager by Senop Oy [61], which is based on the tunable Fabry–Pérot interferometer (FPI). The camera weighs 720 g. The desired spectral bands are obtained by scanning the spectral range with different air gap values within the FPI [44,62]. The current commercial camera has approximately 1010 × 1010 pixels, providing a vertical and horizontal FOV of 36.5° [61,63]. In total, 380 spectral bands can be selected with a 1-nm spectral step in the spectral range of approximately 500–900 nm, but in typical UAV operation, 50–100 bands are collected. The FWHM increases with the wavelength if the order of interference and the reflectance of the mirrors remain the same [64]. However, in the practical implementation of the Rikola camera, the resulting FWHMs are similar in the visible and near-infrared (NIR) ranges: approximately 5–12 nm. The Rikola HSI records up to 32 individual bands within a second; so, for example, a hypercube with 60 freely selectable individual bands can be captured within a 2-s interval. Recently, a short-wave infrared (SWIR) range prototype camera was developed, with a spectral range of 0.9–1.7 μm and an image size of 320 × 256 pixels [65,66].
Benefits of the sequential 2D imagers are the comparably high spatial resolution and the flexibility to choose spectral bands. At the same time, the more bands that are chosen, the longer it takes to record all of them. In mobile applications, the bands in individual cubes have spatial offsets that need to be corrected in post-processing (c.f. Section 3.3.2) [44,67,68]. The frame rate, exposure time, number of bands, and flying height limit the flight speed in tunable filter-based systems. The FPI cameras have been used in various environmental remote sensing studies, including precision agriculture [44,69,70,71], peat production area moisture monitoring [65], tree species classification, forest stand parameter estimation, biodiversity assessment [72,73,74], mineral exploration [68], and detection of insect damage in forests [60,75].
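To illustrate the magnitude of these band-to-band offsets, the sketch below estimates the ground displacement between the first and last band of a hypercube from the flight speed and the per-band acquisition time. It is a minimal example with assumed flight parameters, not a reproduction of any of the cited studies.

```python
# Minimal sketch: ground offset between bands of a sequential (tunable filter) imager.
# All parameter values are illustrative assumptions, not specifications of any camera.

def band_offset_m(flight_speed_ms: float, seconds_per_band: float, band_index: int) -> float:
    """Ground displacement of band `band_index` relative to the first band."""
    return flight_speed_ms * seconds_per_band * band_index

flight_speed = 4.0          # m/s, assumed multi-rotor cruise speed
seconds_per_band = 1 / 32   # the Rikola FPI records up to 32 bands per second
n_bands = 60                # freely selectable bands in one hypercube

last_band_offset = band_offset_m(flight_speed, seconds_per_band, n_bands - 1)
print(f"Offset of last band vs. first band: {last_band_offset:.2f} m")
# ~7.4 m on the ground, i.e., hundreds of pixels at centimeter GSD,
# which is why per-band co-registration (Section 3.3.2) is required.
```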

2.3.3. Snapshot 2D Imagers

Snapshot systems record all of the bands at the same time [43,76,77], which has the advantage that no spatial co-registration needs to be carried out [43]. Currently, multi-point and filter-on-chip snapshot systems exist for UAVs.

Multi-point spectrometer

Multi-point spectrometers use a beam splitter to divide the 2D image into sections, the signal of which is spread in the spectral domain [78]. An example is the Cubert Firefleye [79]. The camera and a controlling computer weigh about 1 kg and record an image cube with 50 × 50 pixels of spectral data from 450–900 nm with an FWHM of 5 nm (460 nm) to 25 nm (860 nm). Simultaneously, a one-megapixel grey image with the same extent is taken and can be used to collate the images into a full scene [43]. Multi-point snapshot cameras have been used to derive chlorophyll [42], plant height [43], and leaf area index [80] in crops.
The advantage of multi-point 2D imagers is that they record all of the spectral information for each point in the image at the same time, and typical integration times are very short due to the high light throughput. However, the disadvantage is the relatively low spatial resolution of the spectral information.

Mosaic filter-on-chip cameras

In the mosaic filter-on-chip technology, each pixel carries a spectral filter with a certain transmission, analogous to the principle of a Bayer pattern in an RGB camera. The combined information of the pixels within a mosaic or tile then represents the spectral information of the area seen by the tile. The technology is based on a thin wafer on top of a monochromatic complementary metal–oxide semiconductor (CMOS) sensor, in which the wafer contains band-pass filters that isolate spectral wavelengths according to the Fabry–Pérot interference principle [81,82]. The wafers are produced in a range of spatial configurations, including linear, mosaic, and tile-based filters. This technique was developed by Imec using the FPI filters to provide different spectral bands [81]. Currently, the chip is available for the range of 470–630 nm in 16 bands (4 × 4 pattern) with a spatial resolution of 512 × 256 pixels, and in the range of 600–1000 nm in 25 bands (5 × 5 pattern) with a spatial resolution of 409 × 216 pixels. The FWHM is below 15 nm for both systems. The two chips are integrated into cameras by several companies, and weigh below 400 g (e.g., [83,84]). So far, only first attempts with this new technology have been published [85,86], including a study on how to optimize the demosaicing of the images [87]. Recently, Imec has announced a SWIR version of the camera [88].
The advantage of filter-on-chip cameras is that they record all of the bands at the same time. Additionally, they are very light, and can be carried by small UAVs. The disadvantage is that each band is measured just once within each tile; thus, accurate spectral information for one band is only available once every few pixels. Currently, this is tackled by slightly defocusing the camera and by interpolation techniques. They have a higher spatial resolution than multi-point spectrometers, but the radiometric performance of the filter-on-chip technology has not yet reached the quality of the established sensing principles that are used in point and line scanning devices. This mainly results from the technical challenges during the manufacturing process (i.e., strong variation of the thickness of the filter elements between adjacent pixels) and the novelty of the technique. Figure 2 shows images captured by a sequential 2D imager, a multi-point spectrometer, and a filter-on-chip snapshot camera.
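To illustrate how the tiled mosaic maps onto band images, the following sketch separates a raw mosaic frame into 16 sparse band planes. It assumes a simple 4 × 4 layout and nearest-sample extraction, without the defocusing or interpolation refinements mentioned above; the actual band-to-pixel mapping is sensor-specific.

```python
import numpy as np

# Minimal sketch: split a raw 4x4 mosaic filter-on-chip frame into 16 band planes.
# The band-to-pixel layout is an assumption for illustration; real cameras ship a
# sensor-specific mapping, and vendors apply additional interpolation/demosaicing.

def demosaic_4x4(raw: np.ndarray) -> np.ndarray:
    """Return an array of shape (16, H/4, W/4) with one plane per spectral band."""
    h, w = raw.shape
    h, w = h - h % 4, w - w % 4          # crop to a multiple of the 4x4 tile
    raw = raw[:h, :w]
    bands = [raw[r::4, c::4] for r in range(4) for c in range(4)]
    return np.stack(bands)

raw_frame = np.random.randint(0, 1023, size=(256, 512), dtype=np.uint16)  # fake 10-bit data
cube = demosaic_4x4(raw_frame)
print(cube.shape)   # (16, 64, 128): 16 bands at 1/4 of the spatial resolution per axis
```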

Spatiospectral filter-on-chip cameras

To address the latter limitation, a modified version of the filter-on-chip camera called COSI Cam has been developed [89]. This sensor no longer uses a small number of spectral filters in a tiled or pixel-wise mosaic arrangement. Instead, a larger number of narrow-band filters are used, which are sampled densely enough to provide continuous spectral sampling. The filters are arranged in a line-wise fashion, with n lines (a small number, e.g., five or eight) of the same filter next to each other, followed by n lines of the spectrally adjacent filter band. In this arrangement, filters on adjacent pixels only vary slightly in thickness, leading to much cleaner spectral responses than the 4 × 4 and 5 × 5 patterns. The COSI Cam prototype [89] was the first camera using such a chip, capturing more than 100 spectral bands in the range of 600–900 nm. In a further development, by using two types of filter material on the chip, a larger spectral range of 475–925 nm was achieved (ButterflEYE LS; [90]) with a spectral sampling of less than 2.5 nm.
Physically, these cameras are filter-on-chip cameras, but their filter arrangement requires a different mode of operation. Their operation includes scanning over an area, and is similar to the operation of a pushbroom camera. Therefore, these sensors are also referred to as spatiospectral scanners. The 2D sensor can be seen as a large array of 1D sensors, each capturing a different spectral band (in fact, n duplicate lines per band). To capture all of the spectral bands at every location, a new image has to be captured every time the platform has moved the equivalent of n lines. This is achieved by limiting the flying speed and operating the camera with a high frame rate (typically 30 frames per second (fps)), which means a larger portion of the flying time is used for collecting information. A specialized processing workflow then generates the full image cube for the scene [90].
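A rough back-of-the-envelope check of the implied speed limit can be made as follows; this is a sketch under assumed values for n, frame rate, and GSD, and the actual constraint depends on the specific sensor layout and the desired sampling redundancy.

```python
# Minimal sketch: upper bound on flying speed for a spatiospectral (line-wise filter) scanner.
# To sample every band at every ground location, the platform should advance at most
# n filter lines (n * GSD on the ground) between two consecutive frames.

n_lines_per_band = 5     # assumed number of duplicate lines per filter band
frame_rate_hz = 30.0     # typical frame rate mentioned in the text
gsd_m = 0.02             # assumed ground sampling distance of 2 cm

max_speed = n_lines_per_band * gsd_m * frame_rate_hz
print(f"Maximum ground speed: {max_speed:.1f} m/s")   # 3.0 m/s under these assumptions
```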
Due to their improved design, the radiometric quality of the spatiospectral cameras is better than that of the classical filter-on-chip design. At the same time, their data enable the reconstruction of the 3D geometry, similar to other 2D imagers, due to the 2D spatial information within the images. Sima et al. [91] showed that a good spatial co-registration can be achieved that also allows the extraction of digital surface models (DSMs). The drawback of these systems is that they require a large storage capacity and a lower flying speed to obtain full coverage over the target of interest. A further challenge of this kind of sensor is that each band has different anisotropy effects as a result of having a different view angle to the object.

Characterized (modified) RGB cameras

RGB and modified RGB cameras (e.g., cameras where the infrared-blocking filter is removed, so-called color-infrared (CIR) cameras with green, red, and near-infrared bands) can also be used to capture spectral data if they are spectrally and radiometrically characterized and the automatic image adjustment is turned off. These cameras only have a very limited number of rather wide spectral bands, but offer a very high spatial resolution at comparatively low cost. An example is the Canon S110 NIR, which records green (560 nm, FWHM: 50 nm), red (625 nm, FWHM: 90 nm), and near-infrared (850 nm, FWHM: 100 nm) bands at 3000 × 4000 pixels. For some cameras, firmware enhancements such as the Canon Hack Development Kit (CHDK Community, 2017) are available and provide additional functionality beyond the native camera firmware, which is potentially useful for remote sensing activities. Berra et al. [92] demonstrated how to characterize such cameras and compared the results of multi-temporal flights to map phenology with Landsat 8 data.
While the main advantage of RGB and CIR cameras is their high spatial resolution and comparatively low cost, a main limitation is the overlap between spectral bands. Additionally, these bands do not comply with the bands originally used in standard vegetation indices (VIs). Thus, a pseudo-normalized difference vegetation index (pseudo-NDVI) is often calculated from some combination of the available bands. However, these cameras were not originally built for precise radiometric measurements, and thus might lack stability. One therefore needs to be careful with the settings, since image-processing procedures within the camera might alter the captured information. The best way to use consumer-grade cameras is to store images in a raw data format (e.g., open DNG; [93]). In addition, some consumer-grade systems only offer a low radiometric resolution of eight bits per channel, which might be unable to resolve subtle radiometric differences.
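For illustration, a pseudo-NDVI from a characterized CIR camera can be computed as in the sketch below. The band names and array shapes are assumptions; because the red and NIR bands of such cameras are broad and overlapping, the result is not directly comparable to a standard narrow-band NDVI.

```python
import numpy as np

# Minimal sketch: pseudo-NDVI from a characterized color-infrared (CIR) camera.
# 'red' and 'nir' are reflectance (or at least linearized DN) arrays of the
# corresponding broad bands; the broad, overlapping responses make this only a
# "pseudo" index, not a replacement for narrow-band NDVI.

def pseudo_ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    denom[denom == 0] = np.nan          # avoid division by zero over dark pixels
    return (nir - red) / denom

nir_band = np.array([[0.45, 0.50], [0.40, 0.55]])   # toy reflectance values
red_band = np.array([[0.08, 0.10], [0.12, 0.05]])
print(pseudo_ndvi(nir_band, red_band))
```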

3. Integration of Sensors and Geometric Processing

Accurate geometric processing is a crucial task in the data processing workflow for UAV datasets. Fundamental steps include the determination of the interior characteristics of the sensor system (interior orientation), the exterior orientation of the data sequence (position and rotation of the sensor during data capture), and the object geometric model, in order to establish the geometric relationship between the object and the recorded radiance value.
Accurate position and orientation information is required to compute the location of each pixel on the ground. Full-size airborne hyperspectral sensors follow the pushbroom design, and some can use a survey-grade differential GNSS receiver (typically multi-constellation and dual frequency capabilities) and IMU to determine the position and orientation (pitch, roll, and heading) of the image lines. When post-processed against a GNSS base station established over a survey mark at a short baseline (within 5–10 km), a positioning accuracy of 1–4 cm can be achieved for the on-board GNSS antenna, after which propagation of all of the pose-related errors typically results in 5–10 cm direct georeferencing accuracy of UAS image data. This can be further improved by using ground control points (GCPs) measured with a differential GNSS rover on the ground or a total station survey [94,95,96,97]. One of the challenges in hyperspectral data collection from UAVs is the limitation in weight and size of the total sensor payload. Survey-grade GNSS and IMU sensors tend to be relatively heavy, bulky, and expensive, e.g., fiber optic gyro (FOG) IMUs providing absolute accuracy in orientation of <0.05° [98]. Development in microelectromechanical systems (MEMS) has resulted in small and lightweight IMUs suitable for UAV applications; however, traditionally, the absolute accuracy of these MEMS IMUs has been relatively poor (e.g., typically ~1° absolute accuracy in pitch, roll, and yaw) [98]. The impact of a 1° error at a flying height of 50 m above ground level (AGL) is a 0.87-m geometric offset for a pixel on the ground. If we consider the key benefit of UAV remote sensing to be the ability to collect sub-decimeter resolution imagery, then such a large error is potentially unacceptable. The combined error in pitch, roll, heading, and position can make this even worse. There is an important requirement for the optimal combination of sensors to determine accurate position and orientation (pose) of the spectral sensor during acquisition (which also requires accurate time synchronization), or an appropriate geometric processing strategy based on image matching and ground control points (GCPs).
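The geometric impact of such pose errors is easy to quantify: for a nadir pixel, the ground offset is approximately the flying height multiplied by the tangent of the angular error, as in the short sketch below (which simply reproduces the numbers quoted above for illustration).

```python
import math

# Minimal sketch: ground offset caused by an uncorrected orientation error.
def ground_offset_m(flying_height_m: float, angle_error_deg: float) -> float:
    return flying_height_m * math.tan(math.radians(angle_error_deg))

print(ground_offset_m(50.0, 1.0))    # ~0.87 m for a 1 deg error at 50 m AGL (as in the text)
print(ground_offset_m(50.0, 0.05))   # ~0.04 m for a survey-grade FOG IMU (0.05 deg)
```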

3.1. Georeferencing of Point Spectrometer Data

While point spectrometers offer high spectral resolution, their data contain no spatial reference. Thus, precise positioning and orientation information as well as an accurate digital surface model are necessary to project the measurement points onto the surface. One approach is to use precise GNSS/IMU equipment, which is still expensive. An alternative is to align and capture data simultaneously with a 2D imager, such as, for example, a monochrome or RGB machine vision camera (e.g., [25]). In the next step, computer vision algorithms such as SfM can then be used to derive the orientation and position of the images and the associated point spectrometer measurements, assuming that the images contain sufficient features and are captured with enough overlap [99,100]. While the second approach is cheaper than the first, both add additional payload to be carried by the sensing system. For both approaches, accurate time synchronization between the spectroradiometer and the GNSS/IMU or camera is required. Furthermore, each spectroradiometer has a certain FOV determined by the slit or foreoptic, which can be constrained with additional accessories, such as a Gershun tube or collimating lens. Furthermore, the integration time of the spectroradiometer will also have an impact on the size of the footprint. For a spectroradiometer with a relatively long integration time (e.g., 1 s) on a moving UAV platform, the circular footprint will be ‘dragged out’ into an elongated shape. Additionally, in off-nadir measurements, the circular footprint also elongates to an elliptical shape [24]. The combined effects of the position, orientation, FOV, integration time of the spectroradiometer, flying height and speed of the UAV, and the surface topography will determine the location and size/shape of the spectral footprint. Finally, one should also consider that measurements of a field spectrometer are center weighted within their FOV, and that the configuration of the fiber and foreoptic might influence the measured signal [101].
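The footprint geometry described above can be approximated with simple trigonometry. The sketch below (a nadir view over flat terrain, with assumed values for speed and integration time) estimates the static footprint diameter and its along-track elongation caused by platform motion during the integration time.

```python
import math

# Minimal sketch: nadir footprint of a UAV point spectrometer over flat terrain.
# Assumptions: circular FOV, nadir view, constant flying height and speed.

def footprint_diameter_m(height_m: float, fov_deg: float) -> float:
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

def along_track_footprint_m(height_m: float, fov_deg: float,
                            speed_ms: float, integration_s: float) -> float:
    # The circular footprint is 'dragged out' by the distance flown during integration.
    return footprint_diameter_m(height_m, fov_deg) + speed_ms * integration_s

h, fov = 50.0, 12.0          # e.g., 50 m AGL with a 12 deg FOV (as for the STS system)
print(footprint_diameter_m(h, fov))                 # ~10.5 m static footprint
print(along_track_footprint_m(h, fov, 4.0, 1.0))    # ~14.5 m with 4 m/s and 1 s integration
```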

3.2. Georeferencing of Pushbroom Scanner Data

Pushbroom sensors need to move to build up a spatial image of a scene. Typically, these sensors collect 20–100 frames per second (depending on integration time and camera specifications). The slit width, lens focal length, and integration time determine the spatial resolution of the pixels in the along-track direction (i.e., flight direction). The number of pixels on the sensor array (i.e., number of columns) and the focal length of the sensor determine the spatial resolution of the pixels in the across-track direction. To accurately map the spatial location of each pixel in the scene, several parameters need to be provided or determined: camera/lens distortion parameters, sensor location (XYZ), sensor absolute orientation (pitch, roll, and heading), and surface model of the terrain. Pushbroom sensors are particularly sensitive to flight dynamics in pitch, roll, and heading, which makes it challenging to perform a robust geometric correction or orthorectification. For dynamic UAV airframes, such as multi-rotors, this is particularly challenging.
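The following sketch translates these relationships into numbers; all parameter values are assumed for illustration. The across-track GSD follows from flying height, pixel pitch, and focal length, while the along-track spacing of the recorded lines is governed by the flying speed and the frame period.

```python
# Minimal sketch: across- and along-track pixel sizes of a UAV pushbroom scanner.
# All numeric values are illustrative assumptions, not specifications of any sensor.

def across_track_gsd_m(height_m: float, pixel_pitch_um: float, focal_length_mm: float) -> float:
    return height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def along_track_gsd_m(speed_ms: float, frame_rate_hz: float) -> float:
    # Distance the line footprint advances between two consecutive exposures.
    return speed_ms / frame_rate_hz

print(across_track_gsd_m(height_m=50.0, pixel_pitch_um=7.4, focal_length_mm=8.0))  # ~0.046 m
print(along_track_gsd_m(speed_ms=4.0, frame_rate_hz=50.0))                          # 0.08 m
```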
Lucieer et al. [38] and Malenovský et al. [39] developed and used an early hyperspectral multi-rotor prototype in Antarctica that did not use GNSS/IMU observations, but rather relied on a dense network of GCPs for geometric rectification based on triangulation/rubber-sheeting. This prototype was later upgraded to include synchronized GNSS/IMU data (Figure 3) in order to enable orthorectification using the PARGE geometric rectification software [102,103].
With the use of a limited number of GCPs and/or on-board GNSS coordinates, machine vision imagery can be used to determine the position and orientation of a hyperspectral sensor without the need for complex and expensive GNSS/IMU sensors. The main advantage of machine vision imagery is that it can be used in rigorous photogrammetric modeling [104], SfM [105], or simultaneous localization and mapping (SLAM) [106] workflows to extract 3D terrain information and pose information simultaneously. Suomalainen et al. [40] developed a hyperspectral pushbroom system with a synchronized GNSS/IMU unit for orthorectification; they used a photogrammetric approach based on SfM to improve the accuracy of the on-board navigation-grade GNSS receiver and derive a digital surface model for orthorectification. Habib et al. [107] and Ramirez-Paredes et al. [108] presented approaches for the georectification of hyperspectral pushbroom imagery that were purely based on imagery and image matching. Their approaches are attractive, as a fully image-based approach reduces the complexity of sensor integration on board the UAV. However, in order to achieve a high absolute accuracy, accurate GCP measurements still need to be obtained, or an accurate on-board GNSS needs to be employed. In addition, to match the frame rate of a hyperspectral sensor, a lot of machine vision data will have to be stored and processed (potentially thousands of images per flight).
Recently, sensor manufacturers have started to produce turnkey hyperspectral pushbroom sensor packages that include the imaging spectrometer, data logging unit, and GNSS/IMU sensors in a small and lightweight package, e.g., Headwall Photonics nano-Hyperspec. One of the major issues with complete packages such as these is the quality of the GNSS and IMU data. The nano-Hyperspec for example carries a GNSS/IMU with a navigation-grade GNSS receiver delivering an absolute accuracy of 5–10 m. In addition, the IMU can measure yaw, but to derive an absolute heading from yaw requires an absolute baseline measurement, which is usually derived from the GNSS flight path and/or a 3D magnetometer. The heading derived from the flight path will provide the general flight direction; however, the UAV airframe can have a completely different absolute heading (i.e., a multi-rotor can have a yaw direction that is different from the flight direction). These heading measurements are notoriously inaccurate, which can result in major georectification errors. A dual antenna tightly coupled GNSS/IMU solution can overcome these issues, but they tend to be heavier and more expensive. Two GNSS antennae at a relatively short baseline, e.g., ~1 m, can offer an absolute heading accuracy of 0.1°. Machine vision data can be used to assist pose estimation and facilitate more accurate georectification through feature image matching and co-registration [103].

3.3. Georeferencing of 2D Imager Data

3.3.1. Snapshot 2D Imagers

The major advantage of snapshot 2D imagers is that the spatial patterns in each image frame can be used in an SfM workflow. Through the selection of an optimal spectral band or the use of the raw 2D hyperspectral mosaic, the SfM process allows for the extraction and matching of image features. The resulting bundle adjustment will then calculate the position and orientation for each image frame without the need for GNSS/IMU sensors (although the image-matching phase can be assisted with GNSS/IMU observations), which reduces the complexity of the setup (Figure 4). Since this approach derives the relative position and orientation of the images, a scene with relative scaling can be generated. For several applications, this is already sufficient, and the approach is appealing, since one can forgo the additional weight and complexity of a GNSS/INS approach. Still, with the aid of an accurate on-board GNSS receiver or GCPs, geometrically accurate orthomosaics can be created (with a typical absolute accuracy of 1–2 pixels). One of the issues with this approach is that 2D imagers tend to have a lower spatial resolution, which can affect the number of matching features found in the SfM process. This can result in poor performance in image matching in complex terrain/vegetation, which has a direct impact on the quality of the spectral orthomosaic. This can be compensated by merging the low-resolution hyperspectral information with, e.g., a higher resolution panchromatic image [43]. An additional benefit of 2D imagers is that an initial bundle adjustment can be followed by an optional dense matching approach, which then allows the generation of high-resolution 3D hyperspectral point clouds and surface models (c.f. Section 5.4).

3.3.2. Georeferencing of Sequential and Multi-Camera 2D Imagers

The multispectral and hyperspectral sensors based on multiple cameras or tunable filters produce non-aligned spectral bands. The straightforward approach would be to determine the exterior orientations of each band individually using SfM. If the number of bands is large, for example 20 or 100, the separate orientation of each band can result in a significant computational challenge, and therefore, solutions based on image registration are more feasible [109]. The transformation can be two-dimensional (such as rigid body, Helmert, affine, polynomial, or projective) or three-dimensional, based on the collinearity model and accounting for the object’s 3D structure, i.e., the orthorectification [110].
Jhan et al. [111] presented an approach utilizing the relative calibration information and projective transformations for the Mini-MCA lightweight camera, which is composed of six individual, rigidly assembled cameras. They used a laboratory calibration process to determine the relative orientations of individual cameras with respect to the master camera in the multi-camera system; the relative orientations of the master camera (red band) and an additional RGB camera were determined. The RGB camera was oriented with bundle-block adjustment, and the exterior orientations of the rest of the bands were calculated based on the relative orientations and the exterior orientations of the reference camera. An accuracy of 0.33 pixels was reported in the registered images. Several researchers reported accuracies on the level of approximately two pixels when using approaches based on 2D transformations with the mini-MCA camera [112,113,114].
In the case of tunable filters such as the Rikola camera, each band has a unique exterior orientation. Honkavaara et al. [67] showed that the geometric challenges increase with decreasing flight height and with increasing flight speed, time difference between the bands, and height differences among the objects. In several studies, good results have been reported when using 2D image transformations in flat environments [44,115]. If the object has great height differences, such as forests, rugged terrain, and built areas, 2D image transformations do not generally give accurate solutions; however, good results have also been reported in a rugged environment when 2D image transformations were combined with an image capture strategy that stops the platform while each hypercube is taken [68]. Image registration based on physical exterior orientation parameters and orthorectification should be used when operating these tunable filter sensors from mobile platforms in environments where the object of interest has significant height differences. Honkavaara et al. [67] developed a rigorous and efficient approach to calculate co-registered orthophoto mosaics of tunable filter images. The process includes the determination of the orientations of three to five reference bands using SfM, subsequent matching of the unoriented bands to the reference bands, calculation of their exterior orientations, and the orthorectification of all of the bands. Registration errors of less than a pixel were obtained in forested environments. The authors emphasized the need for proper block design in order to achieve the desired precision.
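As an illustration of the 2D transformation route, the sketch below uses feature matching and a projective (homography) model to register one band to a reference band with OpenCV. This is a generic example under assumed inputs, not the rigorous orthorectification-based workflow of [67], and feature matching across spectrally distant bands may require more robust descriptors in practice.

```python
import cv2
import numpy as np

# Minimal sketch: projective (homography) co-registration of one spectral band to a
# reference band, e.g., for a multi-camera or tunable-filter 2D imager. This is the
# simple 2D-transformation route discussed above, not an orthorectification workflow.

def register_band(reference: np.ndarray, band: np.ndarray) -> np.ndarray:
    """Warp `band` onto the geometry of `reference` (both 8-bit grayscale images)."""
    orb = cv2.ORB_create(4000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_band, des_band = orb.detectAndCompute(band, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_band, des_ref), key=lambda m: m.distance)[:500]

    src = np.float32([kp_band[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to mismatches

    h, w = reference.shape
    return cv2.warpPerspective(band, H, (w, h))

# Usage (file names are placeholders):
# ref = cv2.imread("band_550nm.tif", cv2.IMREAD_GRAYSCALE)
# nir = cv2.imread("band_800nm.tif", cv2.IMREAD_GRAYSCALE)
# registered = register_band(ref, nir)
```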

4. Radiometric Processing Workflow

4.1. General Procedure for Generating Reflectance Maps from UAVs

Radiometric processing transforms the readings of a sensor into useful data. First, sensor-related radiometric and spectral calibration needs to be carried out. Second, a transformation to top-of-canopy reflectance based on radiometric reference panels and/or an empirical line method (ELM), secondary reference devices, or atmospheric modeling needs to be carried out. Third, the influences of object reflectance anisotropy (bidirectional reflectance distribution function, BRDF) effects and shadows can be normalized. Schott [116] calls this entire multi-step process the image chain approach. The different calibration schemes are outlined in Figure 5. These steps can be carried out sequentially as independent steps, which has been the typical approach in the classical workflows used for airborne and spaceborne applications, for example, as implemented in the ATmospheric CORrection (ATCOR) software [102]. UAVs also introduce some novel aspects to be considered.
The desired output is usually reflectance. However, in typical situations, the output is strictly speaking the hemispherical conical reflectance factor (HCRF; [117,118]), because the IFOV of each pixel captures a conical beam. The pixels of imaging spectrometers have a relatively small IFOV; therefore, their measurements can be considered an approximation of hemispherical directional reflectance factors (HDRF; [42,119]). Finally, multi-angular measurements across large parts of the hemisphere can be used to approximate the BRDF of a surface (e.g., [24]). In addition, if measurements from the hemisphere are averaged, the bihemispherical reflectance, which is also called albedo (blue sky albedo in the MODIS product suite; [118]), can be approximated. Consequently, the albedo is also approximated if the information of pixels with a wide range of different viewing geometries (e.g., from multiple images) is averaged, as is often done during orthomosaic generation from 2D images [42]. In every case, the data needs to be calibrated. In the following subsections, the sensor calibration (Section 4.2) and the image data calibration (Section 4.3) processes are described in detail.

4.2. Sensor-Related Calibration

Radiometric sensor calibration determines the radiometric response of an individual sensor [120,121,122]. The calibration process includes several phases: a relative radiometric calibration, which aims for a uniform output across the pixels and over time; a spectral calibration, which determines the spectral response of the bands; and an absolute radiometric calibration, which determines the transformation from pixel values to the physical unit of radiance. A comprehensive review of calibration procedures for high-resolution radiance measurements can be found in Jablonski et al. [123] and Yoon and Kacker [124]. In the following, we will focus on sensor calibration procedures that are required to generate reflectance maps from spectral UAV data. These procedures will provide sensor-specific calibration factors that are applied to the captured spectrometric datasets. While the calibration steps are essentially the same for point, line, and 2D imagers, the complexity increases with data dimension, since every pixel needs to be characterized and calibrated. In many cases, researchers have implemented their own calibration procedures for small spectrometers or spectral imagers used in UAVs, since either the systems have been experimental setups, or the sensor manufacturers of small-format sensors have not provided calibration files or suitable calibration procedures. The examples in this section are taken from studies with the point spectrometers Ocean Optics STS-VIS [22] and USB4000 [25], the pushbroom system Headwall Photonics Micro-Hyperspec [38,125], other custom-made pushbroom sensors [40,126], the 2D imagers Cubert Firefleye [43,127,128,129] and Rikola FPI [68], and the Tetracam MCA and mini-MCA models [50,130,131].

4.2.1. Relative Radiometric Calibration

Relative radiometric calibration transforms the output of the sensor to normalized DNs (DNn), which have a uniform response over the entire image during the time of operation [121]. This transformation includes dark signal correction and photo response and optical path non-uniformity normalization.
The dark signal noise mainly consists of read-out noise and thermal noise, which are related to sensor temperature and integration time [121]; it is corrected by estimating the dark signal non-uniformity (DSNU). Practical approaches for DSNU compensation are the thermal characterization of the DSNU in the laboratory at multiple integration times, correction based on continuous measurement of the dark current during operation utilizing so-called “black pixels” within the sensor, or taking closed-shutter images. When no dark pixels are available but temperature readings are, the DSNU can be characterized at multiple temperatures and integration times [22,126]. For sensors where neither dark pixels nor temperature readings are available, the DSNU might be estimated by taking pictures with the lens blocked under the same conditions as during the image capture [40,43,68]. Preferably, this should be combined with an analysis of the DSNU variability during operation [127] or with integration time [50].
The optical path of a camera alters the incoming radiant flux (vignetting, c.f. [132,133]), and different pixels transform it non-uniformly to an electric signal. To normalize these effects, either the optical pathway can be modeled or image-based techniques can be applied. For the latter, both simpler and more accurate, approach [134], a uniform target such as an integrating sphere or a homogeneously illuminated Lambertian surface is measured, and a look-up table (LUT) or sensor model is created for every pixel. Suomalainen et al. [40] and Lucieer et al. [38] performed the non-uniformity normalization for their pushbroom systems by taking a series of images of a large integrating sphere illuminated with a quartz-tungsten-halogen lamp. Kelcey and Lucieer [50] and Nocerino et al. [135] determined a per-pixel correction factor LUT using a uniform, spectrally homogeneous, Lambertian flat-field surface for the mini-MCA and the MAIA multispectral cameras, respectively. Aasen et al. [127], Büttner and Röser [126], and Yang et al. [129] used an integrating sphere to perform the non-uniformity normalization and determined the sensor’s linear response range by measuring at different integration times. Aasen et al. [43] and Yang et al. [129] determined the vignetting correction with a Lambertian panel in the field. Khanna et al. [86] presented a simplified approach to the non-uniformity and photo response normalization using computer vision techniques.
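A minimal sketch of these two relative calibration steps, dark signal subtraction followed by a per-pixel flat-field (vignetting and photo response) normalization, is given below. It simplifies the exposure- and temperature-dependent behavior discussed above and assumes the dark and flat frames were acquired under the same settings as the scene images.

```python
import numpy as np

# Minimal sketch: relative radiometric calibration of a 2D imager frame.
# Assumes dark frames and flat-field frames were acquired at the same integration
# time and temperature as the scene images; real workflows model these dependencies.

def relative_calibration(raw: np.ndarray,
                         dark_frames: np.ndarray,
                         flat_frames: np.ndarray) -> np.ndarray:
    dark = dark_frames.mean(axis=0)                       # dark signal non-uniformity (DSNU)
    flat = flat_frames.mean(axis=0) - dark                 # response to a uniform target
    gain_lut = flat.mean() / np.clip(flat, 1e-6, None)     # per-pixel correction LUT
    return (raw - dark) * gain_lut                         # normalized digital numbers (DN_n)

# Usage with synthetic data (shapes only; values are placeholders):
raw_image = np.random.randint(100, 4000, size=(256, 320)).astype(np.float64)
darks = np.random.normal(100, 2, size=(20, 256, 320))
flats = np.random.normal(3000, 50, size=(20, 256, 320))
dn_n = relative_calibration(raw_image, darks, flats)
```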
Finally, while dark signal correction is relatively easy as long as a sensor provides temperature readings, the vignetting correction might be challenging in practice. Large integrating spheres are expensive, and small spheres might not provide homogeneous illumination across the sensor’s FOV. On the other hand, when using a Lambertian surface, such as a radiometric reference panel, it is challenging to illuminate the whole target homogeneously.

4.2.2. Spectral Calibration

The spectral response gives the system’s radiometric response as a function of wavelength for each band and spatial pixel [120,123,136,137]. Monochromators or line emission lamps are usually used to determine the spectral calibration. Either the complete measured spectral response or some functional form is used in the calculations; typically, a Gaussian with the central wavelength and the full width at half maximum (FWHM) is used as the model spectral response. In addition, the smile effect, which causes a shift of the central wavelength as a function of the position of the pixel in the focal plane, and the keystone, which causes a bending of spatial lines across the spectral axis [123], need to be corrected. Thus, the spectral response must be measured in the spectral as well as in the spatial detector dimension. The smile and keystone characterizations are necessary, in particular with hyperspectral sensors [123,137].
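When a Gaussian is used as the model spectral response, the response of band i with central wavelength λ_c,i and width FWHM_i can be written as follows, using the standard relation between the FWHM and the Gaussian standard deviation:

```latex
S_i(\lambda) = \exp\!\left(-\frac{(\lambda - \lambda_{c,i})^2}{2\,\sigma_i^2}\right),
\qquad
\sigma_i = \frac{\mathrm{FWHM}_i}{2\sqrt{2\ln 2}} \approx \frac{\mathrm{FWHM}_i}{2.355}
```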
Information on the spectral response functions of lightweight spectral sensors is still rather limited. To date, mostly monochromators [22,128,129] or HgAr, Xe, and Ne gas emission lamps [38,40,126] have been used for spectral calibration. Garzonio et al. [25] characterized their Ocean Optics USB4000 point spectrometer to measure fluorescence, which requires a rigorous calibration. They also used emission lamps, additionally integrating vibration tests that simulated real flight situations, and found that the system had a good spectral stability.
A different approach can use Fraunhofer and absorption lines of the atmosphere for spectral calibration [138], if the spectral resolution of the sensor is good enough. Busetto et al. [139] published software that estimates the spectral shift of a given data set to moderate resolution atmospheric transmission (MODTRAN) simulations [140]. This approach is particularly important, since it can be used during the flight campaign, and the spectral performance can be different in laboratory and actual flight environments.

4.2.3. Absolute Radiometric Calibration

Absolute radiometric calibration determines the coefficients for the transformation between DN and the physical unit radiance [W m−2 sr−1 nm−1] for each spectral band. Typically, a linear model with gain and offset parameters is appropriate [122,141]. Two procedures have been published to accomplish this. The first approach uses a radiometrically calibrated integrating sphere. Büttner and Röser [126] used a sphere equipped with an optometer for measuring the total radiance that is regularly calibrated against the German national standard (PTB), and stated that the calibration was valid for the spectral range from 380 nm to 1100 nm with a relative uncertainty of 5%. The second approach is to cross-calibrate a new device with an already radiometrically calibrated device. Burkart et al. [22] cross-calibrated their Ocean Optics STS point spectrometer with a radiometrically calibrated ASD FieldSpec Pro 4 by aligning the FOVs of both devices such that they pointed at almost the same area on a white reference panel. Calibration coefficients were obtained from several spectra collected at different solar zenith angles, covering different light levels, by fitting a linear relationship between the ASD radiance values and the STS digital counts, which were normalized for the different instrument integration times. A similar approach was applied by Del Pozo [131] for the 2D imager Tetracam mini-MCA. Both procedures require that the spectral response functions of the sensors are known, since the spectral bands of the reference and the sensor need to be convolved to a common level to derive the band-specific calibration coefficients.
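The cross-calibration itself reduces to fitting a band-wise linear model between the reference radiances and the integration-time-normalized digital counts, as sketched below with synthetic numbers; in practice, the reference spectra must first be convolved to the target sensor's spectral response, as noted above.

```python
import numpy as np

# Minimal sketch: band-wise absolute radiometric calibration by cross-calibration.
# 'dn_norm' are dark-corrected counts normalized by integration time; 'radiance_ref'
# are reference radiances (W m-2 sr-1 nm-1) already convolved to the band response.

def fit_gain_offset(dn_norm: np.ndarray, radiance_ref: np.ndarray):
    """Least-squares fit of radiance = gain * DN + offset for one band."""
    gain, offset = np.polyfit(dn_norm, radiance_ref, deg=1)
    return gain, offset

# Synthetic example: measurements at several light levels (e.g., solar zenith angles)
dn_norm = np.array([1200.0, 2500.0, 4100.0, 5200.0, 6900.0])
radiance_ref = np.array([0.021, 0.044, 0.071, 0.090, 0.119])
gain, offset = fit_gain_offset(dn_norm, radiance_ref)
radiance_new = gain * 3300.0 + offset    # convert a new measurement to radiance
print(gain, offset, radiance_new)
```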
The absolute calibration is often a challenging process, because the source must be traceable to radiance standards. The next section shows that absolute radiometric calibration can be omitted in cases where only reflectance is needed and radiance is not. Besides, many factors influence the system's radiometric response, such as, for example, the shutter, stray light effects, and the impacts of temperature and pressure [123,142]. For applications that require very precise data, such as solar-induced fluorescence estimation, these parameters also need to be considered.

4.3. Scene Reflectance Generation

The instrument recordings can be transformed to reflectance either by using information on the incident irradiance or by utilizing reflectance reference targets on the ground with an empirical line method (ELM).

4.3.1. Reflectance Generation Based on Incident Irradiance

The spectrometer radiances can be transformed into reflectance with the aid of incident irradiance measurements. The incident irradiance can be either estimated using atmospheric radiative transfer models (ARTMs) or measured using an irradiance spectrometer.
ARTMs allow the simulation of the incoming irradiance from the sun to the top of the canopy, as well as of the influence of the atmosphere on the signal on its way from the canopy to the sensor. Input parameters include the time, date, location, temperature, humidity, and aerosol optical depth measured by, e.g., a sun photometer. ARTMs can be used to generate the irradiance necessary to calculate reflectance together with the radiance received by the sensor. One example is Zarco-Tejada et al. [125], who used the SMARTS model [143] for hyperspectral pushbroom imagery acquired at 575 m above ground level, parameterized with the aerosol optical depth measured at 550 nm with a Micro-Tops II sun photometer (Solar Light Co., Philadelphia, PA, USA) in the study areas at the time of the flight. The drawback of ARTMs for reflectance calculations is the need for sufficient parameterization of the atmosphere. This is particularly challenging for flights over larger areas, where the atmosphere might be heterogeneous, and under varying illumination conditions due to clouds.
Due to the challenges with the ARTMs, the technologies measuring the incident irradiance using a secondary spectrometer are of great interest in the UAV spectrometry. The possible methods include using stationary irradiance or radiance recordings (e.g., of a reference panel or with a cosine receptor on the ground) or a mobile irradiance sensor equipped with cosine receptor optics mounted on the UAV.
Burkart et al. [22] used two cross-calibrated Ocean Optics STS-VIS point spectrometers. One of the spectrometers, on-board a multi-rotor UAV, measured the radiance reflected from the object, and the second spectrometer measured a Spectralon panel on the ground. This method is also referred to as a “continuous panel method”, and is similar to setups of classical dual ground spectrometer measurements, which provide reflectance factors by taking consecutive measurements of the target and a Lambertian reference panel (e.g., [144]). Burkhart et al. [27] used a dual-spectrometer approach with two TriOS RAMSES point spectrometers. They calculated the relative sensitivity of the radiance and irradiance sensors and fitted a third-order polynomial to this ratio. They transformed the radiance measurements of the downward-facing device to reflectance utilizing the simultaneous irradiance measurements of an upward-facing spectrometer equipped with a cosine receptor on-board the UAV. Lately, consumer-grade multispectral sensors such as the Parrot Sequoia and MAIA have also been shipped with an irradiance sensor [51,53].
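In both variants, the reflectance factor computation itself is simple. The sketch below shows the two common cases, the continuous panel method and an on-board cosine-corrected irradiance measurement, assuming the devices have been cross-calibrated and the spectra resampled to a common wavelength grid; the values are placeholders.

```python
import numpy as np

# Minimal sketch: reflectance factor generation from simultaneous reference measurements.
# Assumes cross-calibrated devices and spectra resampled to a common wavelength grid.

def hcrf_from_panel(target_radiance, panel_radiance, panel_reflectance):
    """Continuous panel method: ratio to a (near-)Lambertian reference panel."""
    return (target_radiance / panel_radiance) * panel_reflectance

def hdrf_from_irradiance(target_radiance, downwelling_irradiance):
    """On-board irradiance method: R = pi * L / E under a Lambertian approximation."""
    return np.pi * target_radiance / downwelling_irradiance

# Toy values for a single band:
print(hcrf_from_panel(0.030, 0.062, 0.99))          # ~0.48
print(hdrf_from_irradiance(0.030, 0.20))            # ~0.47
```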
An upward-looking sensor equipped with a cosine receptor foreoptic is required to measure the hemispherical irradiance. In reality, the angular response of the cosine receptor deviates from a cosine shape (c.f. Sections 3.3.7 and 4 in [145] and [146]). The cosine error correction depends on the atmospheric state when the measurements were made, and the deviations typically become larger as the incidence angle increases, implying that measured irradiance is underestimated compared with an instrument with a perfect angular response. This is particularly important during times of low sun elevation (e.g., in high latitudes, and in the morning and evening). The underestimation may be corrected for, providing that the sky conditions during the measurements are known, and that the angular response of the instrument is known. Bais et al. [147] reported a deviation from a perfect cosine response of less than 2%.
A further requirement is that the upward-looking detector must be properly leveled to allow accurate measurements of the downwelling irradiance. LibRadtran radiative transfer modeling [148,149] for a solar zenith angle of 55.66° and a sensor tilt of 10° showed differences of approximately 20%, depending on whether the sensor was tilted toward or away from the sun [27]. When flying under cloud cover, the influence is weaker [150]. Examples of the impact of illumination changes on broadband irradiance measurements with an irradiance sensor fixed to the UAV frame, recorded during eight flights under sunny, partially cloudy, and cloudy conditions, were presented by Nevalainen et al. [72]. In sunny conditions, the tilting of the sensor toward and away from the sun caused pronounced fluctuations in the irradiance recordings.
Both the stationary and mobile approaches allow the illumination changes to be determined during the data capture. However, the mobile solution allows the irradiance changes to be tracked at the location of the measurement, which is beneficial in the case of non-homogeneous sky conditions (although it should be kept in mind that, due to the oblique illumination from the sun, the ground may receive a different amount of energy than the reference device on the UAV). Ideally, the second spectrometer (and, where applicable, also the reflectance of the radiometric reference panel) is radiometrically and spectrally cross-calibrated with the primary spectrometer. Often, this is done simultaneously, by cross-calibrating both devices to a third radiometrically calibrated device [22,27,151].

4.3.2. Empirical Line Method (ELM)

The ELM is a commonly used image calibration method in which a set of radiometric reference panels with known spectral reflectance is used to calculate reflectance factors (HDRFs). After the relative or absolute radiometric calibration of the images, a line is fitted with the least-squares method between the image DNs and the measured target reflectance factors [152]. The sensor may be radiometrically calibrated or, if not, the method combines the linear conversions from DN to reflectance factor into a single linear transformation. Implementations of the method can be found in software packages such as ENVI (Exelis Visual Information Solutions, Boulder, CO, USA).
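A minimal per-band implementation of the ELM fit described above could look as follows; the array shapes and function names are illustrative assumptions, not those of a particular software package.

```python
import numpy as np

def empirical_line_fit(panel_dn, panel_reflectance):
    """Fit the empirical line (per band) between image DNs and known panel HDRFs.

    panel_dn          : (n_panels, n_bands) mean DN extracted over each panel
    panel_reflectance : (n_panels, n_bands) measured panel reflectance factors
    Returns per-band gains and offsets.
    """
    panel_dn = np.asarray(panel_dn, float)
    panel_reflectance = np.asarray(panel_reflectance, float)
    gains, offsets = [], []
    for b in range(panel_dn.shape[1]):
        gain, offset = np.polyfit(panel_dn[:, b], panel_reflectance[:, b], deg=1)
        gains.append(gain)
        offsets.append(offset)
    return np.array(gains), np.array(offsets)

def apply_elm(image, gains, offsets):
    """Convert an image cube (rows, cols, bands) from DN to reflectance factors."""
    return image * gains + offsets
```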
Several researchers have applied the ELM for their UAV operations (e.g., [129,130]). Lucieer et al. [38] used five near-Lambertian gray panels with 5% to 70% reflectance built with a special paint that provided a reasonably flat spectral response. Yang et al. [129] used five artificial near-Lambertian tarps placed on flat ground for the ELM. Wang and Myint [153] described the ELM for transforming raw RGB images to reflectance based on nine characterized gray panels with different intensities (although they did not perform a relative radiometric correction, as would be recommended). Additionally, the ELM has been used with (modified) RGB consumer-grade cameras [154,155]. Several UAV studies have also used a simplified ELM with only one panel; the reflectance factors are calculated by ratioing the target and the white reference measurements [43,129,144]. However, Aasen and Bolten [42] found issues when using this simplified ELM for UAVs. When the sensor and the UAV are placed above the panel, a large part of the hemisphere is (invisibly) shaded. This can introduce a severe wavelength-dependent bias to the measurements that also affects the retrieval of vegetation parameters. The bias is strongest under cloudy conditions, and can amount to up to 15% [42]. Thus, we recommend not using the simplified ELM for UAV research when the UAV is placed above the panel at a short distance.
The ELM is simple and accurate if all of the assumptions of the method are met. When the ELM is used in its simple form, many factors can deteriorate the accuracy, such as variations in atmospheric conditions over the area of interest, topographic variations, and BRDF effects. A minimum of two reference targets covering the range of reflectance values of interest should be used; typically, the range is 0–50% for vegetation. Adding more than two targets reduces uncertainties, enables an assessment of sensor linearity, and allows the ELM results to be evaluated by using some panels only for verification. The calibration targets should be flat and leveled, without obstructions, and large enough (preferably more than five times the image GSD, so that only the middle part of the panel needs to be selected) to reduce adjacency effects. Targets should have uniform intensity and near-Lambertian reflectance characteristics. If it is not possible to deploy targets, the reflectance of ground objects with an appropriate reflectance range and near-Lambertian behavior, such as gravel, sand, or asphalt surfaces, can be measured and used instead. The assumption of Lambertian reflectance can be relaxed if the reflectance anisotropy of the target is characterized and considered [156,157].
Miura and Huete [158] stated that the ELM was suitable for flight times shorter than 30 min under stable weather conditions (clear sky), when the ELM results of panel measurements taken at the beginning and end of a flight were linearly interpolated. However, the main disadvantage is that the ELM cannot compensate for illumination changes during the flight, since the panels are not within every image. Thus, the ELM alone is not suitable under variable conditions.

4.3.3. Atmospheric Correction

Section 4.3.1 described how ARTMs can be used to simulate the irradiance needed to calculate reflectance. Additionally, ARTMs are widely used to correct multispectral and hyperspectral satellite and airborne data [102,159], as well as high-altitude UAV images [125,126], for the atmospheric influence on the path from the object to the sensor. Recent modeling studies suggest that atmospheric correction is important for very precise radiometric measurements, e.g., to estimate solar-induced chlorophyll fluorescence [160]. For reflectance studies, a detailed analysis is still missing; thus, atmospheric influences should be considered in each application [27,131]. An easy approach to normalize the influence of the atmosphere is to use the ELM.

4.4. Scene Reflectance Correction

4.4.1. BRDF Correction

When analyzing imaging data measured with wide-angle FOVs, the anisotropy of the surface might cause significant radiometric differences within individual images and between neighboring images [42,161,162,163]. Wide-angle FOVs are very common in UAV imaging spectroscopy, as they facilitate a larger spatial coverage. This introduces unwanted effects when mosaicing the images, and affects the spectral signature of objects within the scene [42,161]. BRDF correction is defined as the process of compensating for the influence of anisotropy, so that the image reflectance values correspond to the reflectance factor in the (mostly) nadir direction. The commonly used BRDF models can be classified as physical, empirical, or semi-empirical [164,165]. In classical remote sensing, a BRDF correction is usually carried out by means of empirical models [102,119,159,164,166]. The BRDF correction is calculated by determining the BRDF model of the target of interest, and then calculating a multiplicative correction factor for BRDF compensation [159,164]. It may also include the calculation of, and normalization to, reflectance factors for a desired geometry [118,167].
The BRDF correction using the simple empirical model by Walthall et al. [168] and Nilson and Kuusk [169] has been used by Beisl [159,164] and by Honkavaara et al. [44,65,163] in the radiometric block adjustment approach. Different statistical methods are also popular for correcting BRDF effects. Laliberte et al. [112] used the dodging method to compensate for uneven lighting conditions across an image frame caused by BRDF effects, fragmented cloud cover, vignetting, and other factors. The process is based on global statistics calculated for a group of images, in order to balance the radiometry both within individual images and across groups of imagery. Their results showed residual root mean square errors (RMSEs) of less than 2% (in reflectance) for the linear fit between the reflectance mosaic and the reference reflectance.
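As an illustration of the empirical approach, the following sketch fits a simple Walthall-type model to multi-angular reflectance observations and derives a multiplicative nadir-normalization factor. It is a simplified illustration that omits the solar-angle terms of the modified model used in the cited block adjustment work.

```python
import numpy as np

def walthall_model(view_zenith, rel_azimuth, a, b, c):
    """Simple Walthall-type model for a fixed solar geometry:
    rho = a * tv**2 + b * tv * cos(phi) + c,
    with view zenith tv and relative azimuth phi given in radians."""
    return a * view_zenith**2 + b * view_zenith * np.cos(rel_azimuth) + c

def fit_walthall(view_zenith, rel_azimuth, reflectance):
    """Least-squares fit of the coefficients from multi-angular observations of a
    surface (e.g., the same point seen under different angles in overlapping images)."""
    tv = np.asarray(view_zenith, float)
    phi = np.asarray(rel_azimuth, float)
    rho = np.asarray(reflectance, float)
    A = np.column_stack([tv**2, tv * np.cos(phi), np.ones_like(tv)])
    coeffs, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return coeffs  # a, b, c

def nadir_normalization_factor(view_zenith, rel_azimuth, coeffs):
    """Multiplicative factor that normalizes an observation to the nadir direction."""
    a, b, c = coeffs
    return (walthall_model(0.0, 0.0, a, b, c) /
            walthall_model(view_zenith, rel_azimuth, a, b, c))
```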

4.4.2. Topographic Correction

The topography can have a large influence on the local illumination within an image [170]. The radiance of the same material varies depending on whether it is located on a slope oriented toward or away from the incident sunlight. For a correction, a DSM and the Sun’s elevation and azimuth angles at the time of acquisition are needed. Several correction methods exist [170]. Jakob et al. [68] implemented and tested some of the common topographic correction methods with UAV-based imagery for geological applications. The methods comprised Lambertian methods, such as the cosine method [171], the gamma method [172], and the percent method, as well as non-Lambertian methods, such as the Minnaert method [173] and the c-factor method by Teillet et al. [171]. They recommended the c-factor method.
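A minimal sketch of the recommended c-factor correction, assuming a per-pixel cosine of the solar incidence angle derived from a UAV DSM, is given below; it is an illustrative implementation, not the one used in [68].

```python
import numpy as np

def cos_incidence(slope, aspect, sun_zenith, sun_azimuth):
    """Cosine of the local solar incidence angle from slope/aspect (all in radians)."""
    return (np.cos(sun_zenith) * np.cos(slope) +
            np.sin(sun_zenith) * np.sin(slope) * np.cos(sun_azimuth - aspect))

def c_factor_correction(band, cos_i, sun_zenith):
    """C-factor topographic correction (after Teillet et al. [171]) for one band.

    band  : 2D array of radiance or reflectance values
    cos_i : 2D array of cos(incidence angle), e.g., derived from a UAV DSM
    """
    # regression of the observed values against cos(i): band = a * cos_i + b
    a, b = np.polyfit(cos_i.ravel(), band.ravel(), deg=1)
    c = b / a
    return band * (np.cos(sun_zenith) + c) / (cos_i + c)
```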

4.4.3. Shadow Correction

Shadows are cast by 3D objects within the scene as well as by clouds. Approaches for treating shadowed areas include de-shadowing and analyzing the shadowed and sun-illuminated areas separately. Adeline et al. [174] categorized shadow detection methods into six classes: histogram thresholding, invariant color models, object segmentation, geometrical methods, physics-based methods, and unsupervised and supervised machine learning methods. The geometrical methods require a 3D model of the objects and information on the solar elevation and direction to calculate the positions of the shadows. Due to various uncertainties, the accuracy of the geometrical methods is not sufficient in most cases, especially for high-resolution images [175]. Therefore, image-based methods are needed. Adeline et al. [174] used simulated data to obtain accurate reference shadow masks. In these experiments, histogram thresholding on RGB and NIR channels performed best, followed by the physics-based methods. De-shadowing based on physical radiation modeling relies on all shadowed areas being illuminated by diffuse irradiance only; this shadow correction provided good results for hyperspectral airborne images and satellite images [176] and the ADS high-resolution photogrammetric multispectral scanner [175]. To the authors’ knowledge, no such studies exist for high-resolution UAV approaches; thus, further studies are needed in this field.
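As a simple illustration of the histogram-thresholding class of methods, the following sketch derives a shadow mask from the NIR band using a basic Otsu threshold; operational implementations are considerably more elaborate, and the band choice here is an assumption.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Basic Otsu threshold on a 1D array of intensities."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                    # pixels below (or at) each candidate threshold
    w1 = w0[-1] - w0                        # pixels above it
    m0 = np.cumsum(hist * centers)
    mu0 = np.divide(m0, w0, out=np.zeros_like(m0), where=w0 > 0)
    mu1 = np.divide(m0[-1] - m0, w1, out=np.zeros_like(m0), where=w1 > 0)
    between_class_var = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between_class_var)]

def shadow_mask(nir_band):
    """Shadowed pixels tend to form the low-intensity mode of the NIR histogram."""
    t = otsu_threshold(nir_band.ravel())
    return nir_band < t
```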

4.5. Radiometric Block Adjustment

Radiometric block adjustment can be used in cases when the area of interest is covered by multiple overlapping images, such as image blocks captured with 2D or pushbroom imaging sensors. In the photogrammetric (geometric) processing of image blocks, the block adjustment is used to determine the best geometric fit over the entire image block. Radiometric block adjustment is based on a similar idea. The approach is to model the radiometric imaging process, i.e., the model between the object reflectance and the image DN, and then solve the parameters of this model using optimization techniques utilizing redundant information from the multiple overlapping images [44,163]. The outputs of the process are the parameters of the radiometric model, which can be used in the following processes to produce radiometrically corrected image products, such as reflectance mosaics, reflectance point clouds, or reflectance observations of objects of interest [44,163]. Similar approaches have previously been used with aircraft images [177,178,179,180].
The model between DN and reflectance by Honkavaara et al. [44] accounts for the variability of the radiance measurement and the BRDF effects, and determines the absolute transformation from DN to reflectance using the ELM. In the adjustment process, a set of radiometric tie points is determined, and observation equations are formed utilizing the DN observations of each radiometric tie point in multiple images. In addition to the radiometric tie points, other observations can also be included. In the current implementation, radiometric control points (e.g., reflectance panels, c.f. Section 4.4.3) and the a priori values of relative differences in the irradiance of different images can be included as observations [163]. Relevant model parameters are selected for each adjustment task. For example, during overcast conditions, it is not necessary to use the BRDF parameters, whereas under stable conditions, the relative correction parameters are not usually necessary. Furthermore, a comprehensive weighting strategy is used to reach the optimal results in the combined adjustment mode [163]. The reflectance outputs calculated as a result of this procedure are, by definition, hemispherical-directional reflectance factors (HDRFs). Many of the steps in Figure 5 are thus integrated into the radiometric block adjustment process.
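To illustrate the principle, the strongly simplified sketch below solves only per-image relative correction factors from radiometric tie-point observations in log space; the actual adjustment in [44,163] additionally includes BRDF parameters, absolute (ELM-based) calibration, irradiance observations, and a weighting strategy.

```python
import numpy as np

def relative_block_adjustment(observations, n_images, n_points):
    """Minimal relative radiometric block adjustment.

    observations : list of (image_index, point_index, dn) tuples, i.e., the DN of
                   each radiometric tie point in every image in which it is visible
                   (the observation graph must connect all images).

    Solves dn_ij ~ a_i * r_j in log space for per-image relative correction
    factors a_i and tie-point values r_j, fixing a_0 = 1 as the reference.
    """
    A = np.zeros((len(observations) + 1, n_images + n_points))
    y = np.zeros(len(observations) + 1)
    for row, (img, pt, dn) in enumerate(observations):
        # log(dn) = log(a_img) + log(r_pt)
        A[row, img] = 1.0
        A[row, n_images + pt] = 1.0
        y[row] = np.log(dn)
    A[-1, 0] = 1.0          # gauge constraint: first image is the reference
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.exp(x[:n_images])   # per-image relative correction factors
```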

5. Discussion and Best Practice

5.1. Sensors

During the last 10 years, the number of (commercial) sensors tailored for UAV sensing systems has rapidly increased. Today, sensors are able to capture data faster and with much higher spatial resolution, which allows flying higher and faster, and covering a much larger area. While most of the UAV sensors still cannot compete with their bigger counterparts carried on airplanes, such as the CASI [181], the airborne prism experiment (APEX; [182]), HyPlant [183], the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS; [184]), or NASA Goddard’s LiDAR, Hyperspectral, and Thermal Airborne Imager (G-LiHT; [185]), UAV point and pushbroom sensing systems, in particular, have been demonstrated to fill a unique niche for a large variety of applications and research purposes. We expect this trend to continue, since manufacturers of professional airborne sensors such as HySpex and Specim have now also started to build UAV sensors [41,186]. At the same time, UAV remote sensing has grown into its own discipline, with research particularly directed at investigating and improving the quality of small and lightweight sensors [22,50] and at further developing data processing algorithms to fit the ultra-high resolution data, including quality assurance approaches [43,68]. Moreover, innovative approaches empowered by the new technology are being developed that go beyond the classical capabilities of remote sensing platforms, such as rapid BRDF quantification [24,70,161,187] and simultaneous spectral and 3D mapping [43,44,60,72].
Table 2 reflects these developments by summarizing key publications on novel sensors, concepts, or methods for calibration, integration, or data pre-processing for UAV spectral sensors and data. Furthermore, the interested reader is referred to [15] for a comprehensive list of spectral imaging sensors that extends the examples in this manuscript. Due to the variety of sensors and their configurations, the classical discrimination between hyperspectral and multispectral is becoming blurred. Thus, every published study should contain information on the specific band configuration. Since so many spectral sensors with different configurations have appeared, it has become hard to compare results between studies. Making the sensor configuration transparent is the very first step in addressing this issue. However, a prerequisite for such information is a comprehensive characterization and calibration of the sensing system. While the interested reader is referred to the calibration studies in Table 2 and Jablonski et al. [123], we see the main responsibility as lying with the camera manufacturers. At the same time, this also includes the correct usage of common terminology (e.g., spectral sampling interval versus FWHM).
It is important to note that there is most likely no sensor that is able to meet all needs. Additionally, when selecting a sensor, users are usually confronted with a trade-off between spatial resolution, spectral resolution, and coverage. Generally, a higher spatial resolution leads to a lower spectral resolution, due to physical constraints in sensor design. Additionally, if larger areas are to be covered, this is mostly achieved by flying higher, which in turn increases the ground sampling distance (i.e., lowers the spatial resolution).
We expect that more UAV sensors will become available in the near future and identify two trends. On one hand, there are the more complex and expensive cameras that are able to capture many bands or implement new techniques to capture spectral information (e.g., [31,188,189]) with even more lightweight and small sensors. These sensors allow researchers to conduct research on spectral sensing and identify promising bands for different applications. Another trend is toward more consumer-oriented cameras that are relatively easy to use and allow standardized tasks to be carried out, such as acquisition of NDVI imagery.

5.2. Geometric Processing

We argue that high absolute geometric accuracy is a requirement for hyperspectral data products acquired by UAVs. The ultra-high spatial resolution of UAV imagery is one of the key benefits for addressing new challenges in environmental and agricultural remote sensing applications. With most hyperspectral UAV sensors offering sub-decimeter spatial resolution, the absolute accuracy of the resulting data products should arguably be better than 10 cm to allow for accurate co-registration with other UAV datasets and the differentiation of small-scale features, such as sub-canopy elements (e.g., individual leaves, branches, stems, etc.). High absolute accuracy requires centimeter-level positioning capabilities on-board the UAV and/or centimeter-level accuracy of GCP coordinates. This requires a local GNSS base station and a differential multi-constellation (GPS, GLONASS, Galileo, Beidou) multi-frequency (L1/L2/L5) GNSS receiver on-board the UAV or for the collection of GCP coordinates. We argue that navigation-grade GNSS receivers, offering an absolute accuracy of 5–10 m, are inadequate for UAV applications. There are four approaches for georeferencing that can be used as a standalone technique or in combination (summarized in Table 3).

Ground control points (GCPs)

GCPs are traditionally used for an indirect georeferencing solution, in which a collection of clearly visible targets is distributed over the study site. A real-time kinematic (RTK) GNSS survey of the GCP markers (with an absolute accuracy of 2–4 cm, depending on the baseline distance to the base station) or a total station survey (accurate to within 1 cm or better) is generally used to obtain accurate coordinates for the GCPs. The GCPs are identified in the imagery, and a geometric image transformation is performed, preferably as part of the bundle adjustment, through rigorous photogrammetric modeling or the typical SfM workflow. For 2D imagers and SfM orthomosaics, relatively few GCPs are needed. Typically, 5–13 GCPs, distributed along the outside of the scene with at least one GCP in the center, are sufficient for most scenes [94].
The advantage of this approach is that it does not require complex and expensive GNSS/IMU hardware on the UAV. The disadvantages are as follows. (i) The manual distribution of GCPs in the field and setup of field-based GNSS equipment can be time-consuming (especially for complex/dangerous environments). (ii) GCPs are unsuitable for pushbroom imagery, as GCPs do not account for pitch, roll, and yaw distortions induced by UAV flight dynamics. Correcting for these distortions requires a very dense network of GCPs, and rigorous correction is not possible. (iii) GCPs are unsuitable for a point-measuring spectroradiometer, as the GCPs are not visible in spectral readings.

On-board GNSS/IMU (direct georeferencing)

An on-board GNSS/IMU sensor synchronized with the spectral sensor in combination with a digital surface model (DSM) can provide sufficient data for a full orthorectification process based on on-board data alone, without the need for field work. This is the traditional approach for full-size airborne hyperspectral geolocation, which requires specialist hardware and software integration of the GNSS/IMU unit and the spectral sensor. For a point-measuring spectroradiometer, this is one of very few options for the geolocation of the spectral footprint.
The advantage of this approach is that it allows for rigorous georeferencing of a 2D, pushbroom, or point-based spectrometer without the need for GCPs. The disadvantages include (i) complex integration of a GNSS/IMU unit on-board the UAV due to the additional weight and cost, power regulation, and time synchronization of all of the sensors, and system calibration of the GNSS/IMU and sensor; (ii) post-processing of on-board GNSS data against a GNSS base station or communication/radio requirements for RTK data link for real-time corrections; and (iii) a dual antenna GNSS or high-grade IMU (e.g., FOG) is required for absolute heading determination, which is essential for multi-rotor UAVs.

Structure from Motion (SfM)

SfM can provide a simpler solution for on-board sensor integration. Several open source (e.g., MicMac, VisualSFM, PMVS/CMVS, OpenMVG) and commercial software packages (e.g., Pix4D, Agisoft Photoscan) exist to carry out the SfM process. Several authors have investigated the performance of these packages and compared them to each other for different applications (e.g., [94,191,192,193,194,195]). However, the development of algorithms and software is advancing fast, and the performance of the different solutions might change. The high geometric fidelity of the bundle adjustment in most SfM solutions means that the camera pose can be used to determine the position and orientation of the spectral sensor (provided that the SfM sensor and spectral sensor are accurately synchronized). To achieve a high absolute accuracy, either GCPs or on-board GNSS data are still required. SfM is the favored approach for 2D imagers, as it allows for a small and lightweight solution on-board the UAV.
The advantages of this approach are that a lightweight and small machine vision camera can replace a high-grade on-board GNSS/IMU. In addition, 3D point clouds and DSMs can be derived as part of the SfM process (spectral and structural data products from one flight). Finally, high relative accuracy of the 3D model and orthophoto can be derived through the SfM process. The disadvantages of the SfM approach include the requirement of substantial on-board storage capacity for high-rate machine vision data. In addition, the post-processing of SfM data is computationally demanding. SfM requires high overlap between flight strips, which influences flight planning (and limits the size of the area that can be covered in a single flight). Finally, the high absolute accuracy of an SfM solution still requires accurate GCPs or on-board GNSS data.

Co-registration

The co-registration approach is based on a reference or base RGB orthomosaic; the spectral images (pushbroom or 2D imager) are co-registered to it through image matching procedures (e.g., SIFT features, feature matching, RANSAC, transformation based on matching points). The RGB orthomosaic can be generated using any of the georeferencing approaches described above. The advantage of this approach is that it reduces the requirements for GNSS/IMU or the machine vision integration of the spectral sensor. The disadvantage is that co-registration is not a straightforward process, and requires identification of sufficient matching features. Sufficient spectral and spatial similarity has to exist between the RGB orthomosaic and the spectral imagery. Additionally, high absolute accuracy still requires a GCP survey or high-accuracy on-board GNSS on the SfM UAV.
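A minimal sketch of such a feature-based co-registration (here with ORB features and a RANSAC homography in OpenCV) is shown below; the band choice, feature detector, and transformation model are illustrative assumptions rather than a recommended configuration.

```python
import cv2
import numpy as np

def coregister_to_reference(spectral_band, rgb_reference):
    """Co-register a single spectral band (8-bit, 2D array) to a grayscale
    RGB reference orthomosaic via feature matching and a RANSAC homography."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(spectral_band, None)
    kp2, des2 = orb.detectAndCompute(rgb_reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched features before estimating the transformation
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = rgb_reference.shape[:2]
    return cv2.warpPerspective(spectral_band, H, (w, h))
```

Note that sufficient spectral similarity between the matched bands is assumed; in practice, a spectrally similar band (e.g., red or NIR) is typically matched against the reference.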
Recent developments in relatively low-cost GNSS receiver boards (e.g., Tersus, Emlid, Piksi) allow for high-accuracy position solutions (<10 cm absolute accuracy) on-board UAVs for a fraction of the cost of earlier survey-grade GNSS receivers. This will open the way for low-cost lightweight direct georeferencing solutions at sub-decimeter accuracy in the future; however, rigorous testing needs to occur in order to determine the effects of single frequency carrier phase versus dual frequency GNSS processing for typical UAV flight dynamics.

5.3. Radiometric Processing

In good conditions, many radiometric calibration approaches are able to provide good quality reflectance data from the spectrometer observations. However, one of the benefits of UAVs stated in the literature is the ability to fly below clouds to capture data (e.g., [44,196,197]). At the same time, a study by Hakala et al. [198] reported an influence of more than 100% on spectral UAV measurements due to “fluctuating levels of cloudiness”. Section 4 reviewed the currently available options for transforming the spectral information captured by a sensor to the at-object reflectance. Table 4 summarizes their suitability under different atmospheric conditions.
Major challenges hindering the use of ARTM simulations for atmospheric correction are the uncertainties in the absolute radiometric calibration of the sensor and the inability to parameterize and model the atmospheric influence on the irradiance, in particular under unstable conditions. A dual spectrometer approach requires cross-calibration of the sensors; it can compensate for illumination changes, but only if the secondary reference sensor and the UAV sensor are close enough. Miura and Huete [158] found that, under clear and cloud-free conditions, the approach with a stationary second spectrometer on the ground outperforms methods in which a reference measurement is only taken before or after the flight (and then interpolated). A properly stabilized secondary device carried by the UAV allows illumination changes to be corrected for at the place of the measurement, provided that the sun elevation is high enough for the cosine receptor to properly capture the illumination conditions. Although the implementation of this approach is more complex, it eases the flight operations, since no ground equipment is needed. We encourage manufacturers to build integrated dual spectrometer systems that provide accurate recordings in the dynamic conditions met in UAV remote sensing; challenges include rapid illumination changes, platform vibrations and movement, temperature effects, and others. For more information on the cross-calibration and reflectance factors retrieved with radiometric reference panels and multiple spectrometers, the interested reader is referred to the works of Anderson et al. [3,151,199,200]. Further challenges that remain unresolved with dual spectrometer systems include the disturbances caused by object reflectance anisotropy, by shadows that are captured in a measurement (e.g., in part of an image) but not by the irradiance sensor, and by the object topography.
The ELM is an easy and straightforward approach for the radiometric correction of datasets under constant illumination conditions and if there is sufficient space to place the reference panels. The ELM is particularly challenging in forest studies (e.g., [72,74]), since the panels may have to be deployed in small openings inside the forest. Here, the illumination conditions do not correspond to those at the top of the canopy, due to the scattering and blocking of the direct and diffuse sky radiance by the surrounding canopy. In this case, the radiometric block adjustment can be beneficial for transferring the calibration obtained from panels placed in an open area to the area of interest, as long as the datasets are connected, preferably with the support of an irradiance sensor on-board the UAV [72,74].
Radiometric block adjustment has been applied in studies with different settings to produce uniform image mosaics [163,177,178,179,180,201]. In principle, the method can compensate for illumination changes during the flight campaign based on the information contained in the images. Thus, no further equipment is needed on-board the UAV for the irradiance measurement, but irradiance recordings can also be integrated into the same adjustment process. In several studies, the method has provided the best uniformity over the entire image dataset when compared with approaches based on ground irradiance spectra and on-board irradiance measurements [44,201]. In Honkavaara et al. [44] and Hakala et al. [202], datasets were captured in illumination conditions varying from cloudy to sunny. The performances of approaches based on the irradiance radiometer on-board the UAV, an irradiance spectrometer on the ground, and the radiometric block adjustment were compared, and the radiometric block adjustment provided the best results. Similar conclusions were drawn by Miyoshi et al. [201]. Radiometric block adjustment also provides uniform mosaics for datasets composed of several flights with varying solar azimuth and elevation [163]. Although the radiometric block adjustment can compensate for illumination fluctuations, it is important to note that the radiometric resolution might be decreased when the sensor is underexposed, and thus differences in reflectance may not be sufficiently resolved for certain analyses [43].
The BRDF correction has mostly been carried out by means of empirical modeling. However, different surfaces (e.g., vegetation types) have different anisotropic behaviors, which makes empirical modeling challenging. With multiple overlapping images provided by 2D imagers, it is now possible to retrieve the BRDF of different surfaces and use it to normalize the data. Additionally, incorporating structural information is seen as a way forward [166]. As noted by Aasen et al. [42,43], the combination of 3D and spectral information derived by 2D imagers is potentially suited for this purpose. In addition, the anisotropy can also be used as a source of information [71,187].
New radiometric correction tools have been implemented in software packages for UAV image data processing. For example, Pix4D and Agisoft Photoscan offer options for radiometric correction, including sensor calibration-related corrections, irradiance-related correction utilizing irradiance information stored in the image EXIF-file, and sun direction-related correction for some cameras. Moreover, there is the option for radiometric calibration using reflectance panels. Additionally, some cameras, such as the Parrot Sequoia, have an integrated irradiance sensor and GNSS receiver. These are much-needed developments, since they ease the use of spectral sensors and help exploit the potential of UAVs to fly below the clouds.

5.4. Data Products from UAV Sensing Systems

Depending on the sensor configuration, different data products can be retrieved from UAV spectral sensing systems. Point spectroradiometers can measure distinct points in space (e.g., [25]) or integrate over an area of interest to obtain a coarse spatial representation of the spectral properties. With a specialized flying pattern and tilting of the sensor, a multi-angular characterization of an area can be generated [24].
Pushbroom systems allow a 2D spectral representation of the surface to be generated (e.g., [38,40,125]). These hyperspectral images can be overlaid on a digital surface model derived from LiDAR (e.g., [37]) or from RGB SfM. As a result, a spectral digital surface model is generated that represents the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects that make up the surface [43]. The 2D imagers directly allow the generation of spectral and 3D information at the same time, and thus the derivation of spectral 3D point clouds and their derivative spectral digital surface models (c.f. Section 3.3 and e.g., [43,44,65]). Moreover, since the spectral information is implicitly connected to the 3D information of every point, 3D spectral point clouds can be generated with the same approach (e.g., Figure 6).
Additionally, the information from 2D imagers can also be used to generate 2D orthomosaics. However, in comparison with pushbroom systems, their orthorectification is likely to be more precise, since the geometry of the scene has implicitly been taken into account during the mosaicing process (if SfM was used). At the same time, the viewing geometries within the orthomosaics of 2D imagers are more complex than the ones in scenes from pushbroom sensing systems. This is due to the two dimensionality of the data and the high overlap between image frames. In every case where multiple images (or lines for pushbroom systems) overlap, a decision needs to be made on how to ‘blend’ or mosaic the data into a seamless orthomosaic. This decision has a significant impact on the final data product, as a study by Aasen and Bolten [42] shows.
We think that the ultra-high resolution of UAV images can be used for the precise measurements needed in smart farming [203], agricultural [204], and phenotyping applications [42,205,206,207], and, combined with the powerful tools provided by object-based image analysis (OBIA; [208,209]), for classification tasks (e.g., [48]). Additionally, UAVs allow mapping of terrain that is hard to access with proximal sensing methods, such as mangroves, or of tall canopies, such as forests [72,73,74,201]. In addition, the combination of high-resolution 3D data and spectral data enables new segmentation, complementing, and combination approaches for data analysis [210], as shown, e.g., in forests [72,73] and agriculture [211,212]. Still, the analysis of the large amounts of multi-dimensional, high-resolution UAV data remains a major bottleneck, and approaches and algorithms for it should be a focus of future work.

5.5. Quality Assurance and Metadata Information

The signal of an object is influenced at different stages before it is stored as a digital value in a data product. In the last sections, we looked at different sensors and their calibration, geometric and radiometric processing, as well as influences of the environment, i.e., illumination conditions and the atmosphere. The keys to transforming data into useful information are the auxiliary data and metadata. They support the interpretation of scientific data and, in general, help to ensure long-term usability. Metadata provide a basis for the assessment of data quality and the possibility of sharing and comparing data between scientists [213]. They can be pixel-, image (measurement)-, or scene-specific.
Important pixel-specific metadata include the signal-to-noise ratio [123] and an approximation of the radiometric resolution [43], which give an indication of the quality of a pixel value stored in a data product. While both can be derived at the image level during the relative radiometric calibration (c.f. Section 4.3.1), their estimation for pixels in a scene can be complex, since they are modified when, e.g., the information of two pixels is combined. For every measurement, the measurement time (to reconstruct the Sun’s position) and the illumination conditions should be recorded. The latter should include qualitative information on the sky condition (clear or cloud-covered) and a direct–diffuse ratio, which could be derived from a shaded and a non-shaded reference panel. Additionally, the measurement geometry of the FOV or, in the case of imaging sensors, the IFOV of every pixel needs to be stored, since, in interaction with the illumination conditions, the measurement geometry has a significant influence on the data [24,42,161]. In imaging data, this is sometimes visible along the transitions between mosaiced images. In this context, the influence of the data-processing scheme also needs to be taken into account [42]. Thus, metadata that describe how the data were processed need to be generated for every scene. They should include the software and its version, as well as the parameters that were set during processing. Software packages such as Agisoft Photoscan, Pix4D mapper, and the open source tool MicMac [214] generate a report file after processing. These files could be provided as supplementary data in every publication. Other scene-based metadata include information on the method/protocol used to derive top-of-canopy reflectance (c.f. Section 4.4), as well as on the sensors used in the study (including their band configuration and model number or manufacturing year, since some UAV sensors are manually manufactured and constantly improved, which can make individual units effectively unique). As described above, pixel- and image (measurement)-specific metadata can improve the interpretability of the data and should be saved with the data, as is already done in airborne and satellite remote sensing [215,216]. To the best of our knowledge, a standard procedure for UAV remote sensing does not yet exist, but some researchers have implemented such metadata in their work (e.g., [161,163] for the viewing geometry; [43] for the radiometric resolution; [162] to calculate the uncertainty of the output HDRF observations via the image signal-to-noise ratio and the standard deviation of the reflectance transformation; [42] to trace the information from the individual images into the data product). Scene-specific metadata can be stored in an additional file, similar to ENVI header files. Ideally, quantitative metadata parameters should also have an uncertainty assigned to them. Table 5 summarizes the mandatory and optional metadata for UAV remote sensing. We argue that at least the mandatory scene-based metadata should be stated in every publication or its supplementary material. With increasing resolution, additional factors need to be considered that were not visible in remote sensing data of coarser resolution. One example is wind and wind gusts, which can have an influence on the spectral signature. Further studies are needed to evaluate such effects.
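As an illustration, scene-level metadata of this kind could be stored in a simple sidecar file next to the data product; all field names and values below are hypothetical examples, not a proposed standard.

```python
import json

# Hypothetical scene-level metadata sidecar (illustrative field names and values),
# stored next to the data product in the spirit of an ENVI header file.
scene_metadata = {
    "sensor": {"model": "example-2D-imager", "serial": "unit-042",
               "band_centers_nm": [550, 660, 735, 790], "fwhm_nm": [10, 10, 5, 10]},
    "acquisition": {"date": "2018-06-15", "start_time_utc": "10:02:31",
                    "sky_condition": "clear", "direct_diffuse_ratio": 5.2},
    "processing": {"software": "example-sfm-suite 1.4.2",
                   "reflectance_method": "radiometric block adjustment",
                   "mosaicking_rule": "most-nadir pixel"},
    "uncertainty": {"reflectance_rmse_panels": 0.015},
}

with open("scene_product.meta.json", "w") as f:
    json.dump(scene_metadata, f, indent=2)
```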

5.6. Comparability between Sensing Systems

UAVs have been envisaged to bridge the gap between classical ground, full-size aircraft, and satellite sensing systems. While UAV sensing systems are not per se different from other airborne sensing systems, differences between the sensing systems may exist in sensor performance (due to miniaturization), calibration, data processing, measurement geometries (integrated FOV of non-imaging devices versus IFOV of imaging devices, nadir versus oblique), spatial and spectral resolution, and measurement timing (fixed overpass times for satellites versus flexible timing for UAVs).
Several researchers have investigated the comparability of non-imaging ground data with imaging and non-imaging UAV spectral data. Most found offsets, which they attributed to calibration issues [217,218,219]. Aasen and Bolten [42] systematically examined the issue and found that the differences rather resulted from differences in the angular properties of the data. They defined the term specific field of view (SFOV) as a concept to understand the composition of pixels, and their angular properties, used to characterize a specific area of interest. This SFOV is influenced by the sensor’s FOV and the data processing, which explains the differences in the data captured by different sensors, possibly on different platforms, and with different processing [42]. Another study investigated the cross-validation of field and airborne spectroscopy data, identified common sources of errors and uncertainties as well as techniques to collect high-quality spectra under natural illumination conditions, and highlighted the importance of appropriate metadata [167].
Other studies have compared UAV data to satellite observations. For example, a comparison of Landsat 8 and two calibrated Panasonic DMC-LX5 digital cameras (one modified to capture NIR) showed that reflectance was not always consistent due to variable illumination conditions [92]. Burkhart et al. [27] compared a 210-km flight track of UAV nadir reflectance with the MODIS nadir BRDF-adjusted surface reflectance (NBAR) products over a dry snow region near Summit, Greenland. The UAV measurements were slightly higher than the MODIS NBARs for all of the bands, but agreed within their stated uncertainties. Tian et al. [55] compared UAV and WorldView-2 imagery for mapping leaf area index. They found that the high resolution of the UAV images was suitable for eliminating influences from the background in low leaf area index situations.
In addition, many other factors might affect the comparability. One example is the measurement duration of a method that might introduce additional artefacts: satellites may sample many square km in an instant, while it may take the whole day to sample the same area with ground-based measurements; the latter would introduce artefacts from the diurnal illumination change. Research on such subjects has only just started.

6. Conclusions—From Revolution to Maturity of UAV Spectral Remote Sensing

About five years ago, the lack of suitable UAV multispectral cameras, along with shortcomings related to UAV remote sensing including high initial costs, platform reliability issues, the lack of standardized procedures to process large volumes of data, and strict aviation regulations, limited the usability of UAV sensing systems. Today, several of these shortcomings have been overcome, or solutions have been proposed. UAV spectral sensing systems that can capture spectral data at high spectral, spatial, and temporal resolution are now available to a wide audience. We can conclude that the new era in remote sensing with unmanned flying robots is here and ready for widespread use in various application areas.
Overall, we expect that UAV spectral sensing systems will become common in the toolbox of researchers in quantitative remote sensing, forestry, agriculture, field phenotyping, ecology, and other fields that rely on environmental monitoring. We identified two trends. On one hand, 2D snapshot multi-camera systems such as the Parrot Sequoia, MicaSense RedEdge, and Maia S2, which are available below €4000, €5000, and €15,000, respectively, are built to be consumer-friendly. In combination with commercially built fixed-wing or rotary-wing UAVs, these cameras are becoming a powerful tool for researchers, as their rapid adoption shows, but also for service providers, breeding companies, and even farmers. On the other hand, the complex and potentially more flexible, configurable, and expensive sensors are still mostly used by people coming from the “core” remote sensing community. However, with companies such as HySpex and Specim, formerly focused on the development of airplane sensors, entering the UAV market, we hope that professional systems will also drop in price. In the future, we hope to get a more detailed view of these developments with a survey on the state-of-the-art in UAV remote sensing (http://www.phenofly.net/uav-survey/) supported by the OPTIMISE community (https://optimise.dcs.aber.ac.uk/). Overall, we think that UAV sensing systems will not replace very high-quality plane or satellite systems, but rather complement them. UAV sensing systems allow us to go beyond classical remote sensing approaches, for example with customized BRDF sampling or spectral 3D sensing. We are confident that more examples of such exciting UAV applications will appear in the future. Additionally, they provide us with information at higher spatial and temporal resolutions. The potential of such data is yet to be explored, and novel approaches are needed to handle and analyze these data in more robust and meaningful ways. Three-dimensional hyperspectral data at high spatial and temporal resolution has the potential to provide exciting new insights into ecosystem composition and functioning, and to address major research questions in these areas.
The important tasks now are to standardize procedures, develop algorithms, and explore the potential to make use of the large amounts of multi-dimensional, high spatial, temporal, and spectral resolution UAV data. In this review, we showed that many approaches exist, and identified best practice procedures to derive calibrated spectral data from UAV sensing systems. On the other hand, some uncertainties will always remain in data collection and processing. While the data quality might be sufficient for many use cases if the best practice calibration procedures are followed, for some applications, the absolute accuracy might be insufficient. Thus, it is critical to make metadata about the data quality traceable throughout all of the steps of data capturing and processing. In addition, the normalization of the dynamic illumination conditions in drone remote sensing still poses a problem. Although approaches exist, their development is rather recent, and they need to be applied frequently and further investigated in order to achieve the full potential of UAVs as flexible sensing platforms. Additionally, sensor manufacturers have a responsibility. When a sensor is sold, it should be shipped with a good absolute radiometric and spectral calibration and a description of its performance, including the accuracy.
With the ongoing revolution in spectral remote sensing, we cannot rely on others to ensure good data quality. As an emerging UAV remote sensing community, we are responsible for ensuring the good quality of our data products, including metadata. This also includes reliable uncertainty propagation. Key to this effort is spreading knowledge and establishing good practices within the community. We think that the knowledge assembled in this review can be a good basis for maturing UAV spectral sampling into a reliable source of scientific data to address old and new scientific questions from a novel perspective. Such efforts cannot be achieved by one researcher alone. Thus, we think that collaborative work, such as in the European intergovernmental framework for cooperation in science and technology (COST) actions OPTIMISE (ES1309), HARMONIOUS (CA16219), and the newly approved action SENSECO (CA17134), is of great help toward establishing procedures for reliable quantitative remote sensing with UAVs.

Author Contributions

H.A. conceptualized the manuscript together with the other authors and wrote major parts of the manuscript. E.H. wrote major parts of Section 4 and Section 5.3, and contributed to the other parts of the manuscript. A.L. wrote major parts of Section 3 and Section 5.2, and contributed to the other parts of the manuscript. P.J.Z.-T. reviewed the manuscript, contributed text and figures, and supported the work through important discussions.

Funding

We would like to acknowledge a research scholarship awarded to H.A. by the German Academic Exchange Service to visit A.L., which provided the basis for this manuscript. E.H. acknowledges funding from the Academy of Finland (grant No. 305994).

Acknowledgments

We acknowledge the support from the OPTIMISE COST Action (ES1309). OPTIMISE made it possible to exchange knowledge and ideas among scientists at various conferences and workshops. We would like to thank the whole OPTIMISE community for the stimulating exchange and, in general, the great time together. In particular, we thank the participants of the workshop on “Best practice for UAV spectral sampling” in Tartu, Estonia, namely Alasdair Macarthur, Albert Porcar-Castell, Andreas Burkart, Andreas Hueni, Antonis Kavvadias, Dan Sporea, Enrico Tomelleri, Javier Pacheco, Joel Kuusk, Lea Hallik, and Leonidas Toulios. We would also like to thank Stefan Livens for the great visualizations of the different sensing principles in Table 1 and his helpful comments on the manuscript. Additionally, we thank Georg Bareth of the GIS and Remote Sensing working group at the University of Cologne for his general support. We would also like to thank the three reviewers for their critical comments, which helped to improve this manuscript. Finally, yet importantly, we would like to thank the organizing committee of the 10th EARSeL SIG Imaging Spectroscopy workshop for a great workshop and for the organization of this Special Issue.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zarco-Tejada, P.J. A new era in remote sensing of crops with unmanned robots. SPIE Newsroom 2008.
2. Sanchez-Azofeifa, A.; Antonio Guzmán, J.; Campos, C.A.; Castro, S.; Garcia-Millan, V.; Nightingale, J.; Rankine, C. Twenty-first century remote sensing technologies are revolutionizing the study of tropical forests. Biotropica 2017, 49, 604–619.
3. Anderson, K.; Gaston, K.J. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. Front. Ecol. Environ. 2013, 11, 138–146.
4. McCabe, M.F.; Rodell, M.; Alsdorf, D.E.; Miralles, D.G.; Uijlenhoet, R.; Wagner, W.; Lucieer, A.; Houborg, R.; Verhoest, N.E.C.; Franz, T.E.; et al. The future of Earth observation in hydrology. Hydrol. Earth Syst. Sci. 2017, 21, 3879–3914.
5. Vivoni, E.R.; Rango, A.; Anderson, C.A.; Pierini, N.A.; Schreiner-McGraw, A.P.; Saripalli, S.; Laliberte, A.S. Ecohydrology with unmanned aerial vehicles. Ecosphere 2014, 5, art130.
6. Warner, T.A.; Cracknell, A.P. Unmanned aerial vehicles for environmental applications. Int. J. Remote Sens. 2017, 38, 2029–2036.
7. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36.
8. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
9. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–330.
10. Salamí, E.; Barrado, C.; Pastor, E. UAV Flight Experiments Applied to the Remote Sensing of Vegetated Areas. Remote Sens. 2014, 6, 11051–11081.
11. Rango, A.; Laliberte, A.S.; Herrick, J.E.; Winters, C.; Havstad, K.; Steele, C.; Browning, D. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 2009, 3, 033542.
12. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210.
13. Lunetta, R.S.; Congalton, R.G.; Fenstermaker, L.K.; Jensen, J.R.; McGwire, K.C.; Tinney, L.R. Remote sensing and geographic information system data integration: Error sources and research issues. Photogramm. Eng. Remote Sens. 1991, 57, 677–687.
14. Schaepman, M.E. Spectrodirectional remote sensing: From pixels to processes. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 204–223.
15. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110.
16. Pádua, L.; Vanko, J.; Hruška, J.; Adão, T.; Sousa, J.J.; Peres, E.; Morais, R. UAS, sensors, and data processing in agroforestry: A review towards practical applications. Int. J. Remote Sens. 2017, 38, 2349–2391.
17. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712.
18. Goetz, A.F.H.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging Spectrometry for Earth Remote Sensing. Science 1985, 228, 1147–1153.
19. Goetz, A.F.H. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16.
20. Sellar, R.G.; Boreman, G.D. Classification of imaging spectrometers for remote sensing applications. Opt. Eng. 2005, 44, 013602.
21. Schickling, A.; Matveeva, M.; Damm, A.; Schween, J.; Wahner, A.; Graf, A.; Crewell, S.; Rascher, U. Combining Sun-Induced Chlorophyll Fluorescence and Photochemical Reflectance Index Improves Diurnal Modeling of Gross Primary Productivity. Remote Sens. 2016, 8, 574.
22. Burkart, A.; Cogliati, S.; Schickling, A.; Rascher, U. A Novel UAV-Based Ultra-Light Weight Spectrometer for Field Spectroscopy. IEEE Sens. J. 2014, 14, 62–67.
23. Ocean Optics, Inc. STS Series. Available online: https://oceanoptics.com/product-category/sts-series/ (accessed on 13 October 2017).
24. Burkart, A.; Aasen, H.; Alonso, L.; Menz, G.; Bareth, G.; Rascher, U. Angular Dependency of Hyperspectral Measurements over Wheat Characterized by a Novel UAV Based Goniometer. Remote Sens. 2015, 7, 725–746.
25. Garzonio, R.; Mauro, B.D.; Colombo, R.; Cogliati, S. Surface Reflectance and Sun-Induced Fluorescence Spectroscopy Measurements Using a Small Hyperspectral UAS. Remote Sens. 2017, 9, 472.
26. Zeng, C.; Richardson, M.; King, D.J. The impacts of environmental variables on water reflectance measured using a lightweight unmanned aerial vehicle (UAV)-based spectrometer system. ISPRS J. Photogramm. Remote Sens. 2017, 130, 217–230.
27. Burkhart, J.F.; Kylling, A.; Schaaf, C.B.; Wang, Z.; Bogren, W.; Storvold, R.; Solbø, S.; Pedersen, C.A.; Gerland, S. Unmanned aerial system nadir reflectance and MODIS nadir BRDF-adjusted surface reflectances intercompared over Greenland. Cryosphere 2017, 11, 1575–1589.
28. Zeng, C.; King, D.J.; Richardson, M.; Shan, B. Fusion of Multispectral Imagery and Spectrometer Data in UAV Remote Sensing. Remote Sens. 2017, 9, 696.
29. Link, J.; Senner, D.; Claupein, W. Developing and evaluating an aerial sensor platform (ASP) to collect multispectral data for deriving management decisions in precision farming. Comput. Electron. Agric. 2013, 94, 20–28.
30. Shang, S.; Lee, Z.; Lin, G.; Hu, C.; Shi, L.; Zhang, Y.; Li, X.; Wu, J.; Yan, J. Sensing an intense phytoplankton bloom in the western Taiwan Strait from radiometric measurements on a UAV. Remote Sens. Environ. 2017, 198, 85–94.
31. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y.; Komatsu, T. Development of a Low-Cost Hyperspectral Whiskbroom Imager Using an Optical Fiber Bundle, a Swing Mirror, and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3909–3925.
32. Jones, H.G.; Vaughan, R.A. Remote Sensing of Vegetation: Principles, Techniques, and Applications; Oxford University Press: Oxford, UK; New York, NY, USA, 2010; ISBN 978-0-19-920779-4.
33. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99.
34. Headwall Photonics Inc. Micro-Hyperspec Airborne Sensors. Available online: http://www.headwallphotonics.com/spectral-imaging/hyperspectral/micro-hyperspec (accessed on 16 April 2016).
35. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245.
36. Zarco-Tejada, P.J.; Morales, A.; Testi, L.; Villalobos, F.J. Spatio-temporal patterns of chlorophyll fluorescence and physiological and structural indices acquired from hyperspectral imagery as compared with carbon fluxes measured with eddy covariance. Remote Sens. Environ. 2013, 133, 102–115.
37. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43.
38. Lucieer, A.; Malenovský, Z.; Veness, T.; Wallace, L. HyperUAS—Imaging Spectroscopy from a Multirotor Unmanned Aircraft System. J. Field Robot. 2014, 31, 571–590.
39. Malenovský, Z.; Lucieer, A.; King, D.H.; Turnbull, J.D.; Robinson, S.A. Unmanned aircraft system advances health mapping of fragile polar vegetation. Methods Ecol. Evol. 2017, 8, 1842–1857.
40. Suomalainen, J.; Anders, N.; Iqbal, S.; Roerink, G.; Franke, J.; Wenting, P.; Hünniger, D.; Bartholomeus, H.; Becker, R.; Kooistra, L. A Lightweight Hyperspectral Mapping System and Photogrammetric Processing Chain for Unmanned Aerial Vehicles. Remote Sens. 2014, 6, 11013–11030.
41. HySpex. HySpex Mjolnir V-1240. Available online: https://www.hyspex.no/products/mjolnir.php (accessed on 15 November 2017).
42. Aasen, H.; Bolten, A. Multi-temporal high-resolution imaging spectroscopy with hyperspectral 2D imagers—From theory to application. Remote Sens. Environ. 2018, 205, 374–389.
43. Aasen, H.; Burkart, A.; Bolten, A.; Bareth, G. Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance. ISPRS J. Photogramm. Remote Sens. 2015, 108, 245–259.
44. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039.
45. Berni, J.A.J.; Zarco-Tejada, P.J.; Sepulcre-Cantó, G.; Fereres, E.; Villalobos, F. Mapping canopy conductance and CWSI in olive orchards using high resolution thermal remote sensing imagery. Remote Sens. Environ. 2009, 113, 2380–2388.
46. Stagakis, S.; González-Dugo, V.; Cid, P.; Guillén-Climent, M.L.; Zarco-Tejada, P.J. Monitoring water stress and fruit quality in an orange orchard under regulated deficit irrigation using narrow-band structural and physiological remote sensing indices. ISPRS J. Photogramm. Remote Sens. 2012, 71, 47–61.
47. Suárez, L.; Zarco-Tejada, P.J.; Berni, J.A.J.; González-Dugo, V.; Fereres, E. Modelling PRI for water stress detection using radiative transfer models. Remote Sens. Environ. 2009, 113, 730–744.
48. Torres-Sánchez, J.; López-Granados, F.; Peña, J.M. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Comput. Electron. Agric. 2015, 114, 43–52.
49. Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method. Appl. Soft Comput. 2015, 37, 533–544.
50. Kelcey, J.; Lucieer, A. Sensor Correction of a 6-Band Multispectral Imaging Sensor for UAV Remote Sensing. Remote Sens. 2012, 4, 1462–1493.
51. MicaSense. Parrot Sequoia. Available online: https://www.micasense.com/parrotsequoia/ (accessed on 14 December 2017).
52. MicaSense. RedEdge-M. Available online: https://www.micasense.com/rededge-m/ (accessed on 14 December 2017).
53. SAL Engineering. MAIA—The Multispectral Camera. Available online: http://www.spectralcam.com/ (accessed on 14 December 2017).
54. Dash, J.P.; Watt, M.S.; Pearse, G.D.; Heaphy, M.; Dungey, H.S. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak. ISPRS J. Photogramm. Remote Sens. 2017, 131, 1–14.
55. Tian, J.; Wang, L.; Li, X.; Gong, H.; Shi, C.; Zhong, R.; Liu, X. Comparison of UAV and WorldView-2 imagery for mapping leaf area index of mangrove forest. Int. J. Appl. Earth Obs. Geoinf. 2017, 61, 22–31.
56. Albetis, J.; Duthoit, S.; Guttler, F.; Jacquin, A.; Goulard, M.; Poilvé, H.; Féret, J.-B.; Dedieu, G. Detection of Flavescence dorée Grapevine Disease Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery. Remote Sens. 2017, 9, 308.
  57. Zarco-Tejada, P.J.; González-Dugo, V.; Williams, L.E.; Suárez, L.; Berni, J.A.J.; Goldhamer, D.; Fereres, E. A PRI-based water stress index combining structural and chlorophyll effects: Assessment using diurnal narrow-band airborne imagery and the CWSI thermal index. Remote Sens. Environ. 2013, 138, 38–50. [Google Scholar] [CrossRef] [Green Version]
  58. Geipel, J.; Link, J.; Wirwahn, J.; Claupein, W. A Programmable Aerial Multispectral Camera System for In-Season Crop Biomass and Nitrogen Content Estimation. Agriculture 2016, 6, 4. [Google Scholar] [CrossRef]
  59. Honkavaara, E.; Arbiol, R.; Markelin, L.; Martinez, L.; Cramer, M.; Bovet, S.; Chandelier, L.; Ilves, R.; Klonus, S.; Marshal, P.; et al. Digital Airborne Photogrammetry—A New Tool for Quantitative Remote Sensing?—A State-of-the-Art Review On Radiometric Aspects of Digital Photogrammetric Images. Remote Sens. 2009, 1, 577–605. [Google Scholar] [CrossRef] [Green Version]
  60. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef] [Green Version]
  61. SENOP. Optronics Hyperspectral. Available online: http://senop.fi/en/optronics-hyperspectral (accessed on 11 October 2017).
  62. Mäkynen, J.; Holmlund, C.; Saari, H.; Ojala, K.; Antila, T. Unmanned aerial vehicle (UAV) operated megapixel spectral camera. In International Society for Optics and Photonics; Kamerman, G.W., Steinvall, O., Bishop, G.J., Gonglewski, J.D., Lewis, K.L., Hollins, R.C., Merlet, T.J., Eds.; SPIE: Bellingham, WA, USA, 2011; p. 81860Y. [Google Scholar]
  63. De Oliveira, R.A.; Tommaselli, A.M.G.; Honkavaara, E. Geometric Calibration of a Hyperspectral Frame Camera. Photogramm. Rec. 2016, 31, 325–347. [Google Scholar] [CrossRef]
64. Näsilä, A. Validation of Aalto-1 Spectral Imager Technology to Space Environment. Master's Thesis, Aalto University, Helsinki, Finland, 2013. [Google Scholar]
  65. Honkavaara, E.; Eskelinen, M.A.; Polonen, I.; Saari, H.; Ojanen, H.; Mannila, R.; Holmlund, C.; Hakala, T.; Litkey, P.; Rosnell, T.; et al. Remote Sensing of 3-D Geometry and Surface Moisture of a Peat Production Area Using Hyperspectral Frame Cameras in Visible to Short-Wave Infrared Spectral Ranges Onboard a Small Unmanned Airborne Vehicle (UAV). IEEE Trans. Geosci. Remote Sens. 2016, 54, 5440–5454. [Google Scholar] [CrossRef]
  66. Mannila, R.; Holmlund, C.; Ojanen, H.J.; Näsilä, A.; Saari, H. Short-Wave Infrared (SWIR) Spectral Imager Based on Fabry-Perot Interferometer for Remote Sensing; Meynart, R., Neeck, S.P., Shimoda, H., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2014; p. 92411M. [Google Scholar]
  67. Honkavaara, E.; Rosnell, T.; Oliveira, R.; Tommaselli, A. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes. ISPRS J. Photogramm. Remote Sens. 2017, 134, 96–109. [Google Scholar] [CrossRef]
  68. Jakob, S.; Zimmermann, R.; Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sens. 2017, 9, 88. [Google Scholar] [CrossRef]
  69. Moriya, E.A.S.; Imai, N.N.; Tommaselli, A.M.G.; Miyoshi, G.T. Mapping Mosaic Virus in Sugarcane Based on Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 740–748. [Google Scholar] [CrossRef]
  70. Roosjen, P.; Suomalainen, J.; Bartholomeus, H.; Kooistra, L.; Clevers, J. Mapping Reflectance Anisotropy of a Potato Canopy Using Aerial Images Acquired with an Unmanned Aerial Vehicle. Remote Sens. 2017, 9, 417. [Google Scholar] [CrossRef]
  71. Roosjen, P.P.J.; Brede, B.; Suomalainen, J.M.; Bartholomeus, H.M.; Kooistra, L.; Clevers, J.G.P.W. Improved estimation of leaf area index and leaf chlorophyll content of a potato crop using multi-angle spectral data—Potential of unmanned aerial vehicle imagery. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 14–26. [Google Scholar] [CrossRef]
  72. Nevalainen, O.; Honkavaara, E.; Tuominen, S.; Viljanen, N.; Hakala, T.; Yu, X.; Hyyppä, J.; Saari, H.; Pölönen, I.; Imai, N.; et al. Individual Tree Detection and Classification with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2017, 9, 185. [Google Scholar] [CrossRef]
  73. Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M.; Luoma, V.; Tommaselli, A.; Imai, N.; et al. Assessing Biodiversity in Boreal Forests with UAV-Based Photogrammetric Point Clouds and Hyperspectral Imaging. Remote Sens. 2018, 10, 338. [Google Scholar] [CrossRef]
  74. Tuominen, S.; Balazs, A.; Honkavaara, E.; Pölönen, I.; Saari, H.; Hakala, T.; Viljanen, N. Hyperspectral UAV-imagery and photogrammetric canopy height model in estimating forest stand variables. Silva Fenn. 2017, 51, 7721. [Google Scholar] [CrossRef]
  75. Näsi, R.; Honkavaara, E.; Blomqvist, M.; Lyytikäinen-Saarenmaa, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Holopainen, M. Remote sensing of bark beetle damage in urban forests at individual tree level using a novel hyperspectral camera from UAV and aircraft. Urban For. Urban Green. 2018, 30, 72–83. [Google Scholar] [CrossRef]
  76. Hagen, N.; Kester, R.T.; Gao, L.; Tkaczyk, T.S. Snapshot advantage: A review of the light collection improvement for parallel high-dimensional measurement systems. Opt. Eng. 2012, 51, 111702. [Google Scholar] [CrossRef] [PubMed]
  77. Hagen, N.; Kudenov, M.W. Review of snapshot spectral imaging technologies. Opt. Eng. 2013, 52, 090901. [Google Scholar] [CrossRef] [Green Version]
78. Jung, A.; Michels, R.; Graser, R. Hyperspectral Camera with Spatial and Spectral Resolution and Method. Patent EP2944930A3, 23 December 2015. [Google Scholar]
  79. Cubert. GmbH UHD 185—Firefly. Available online: http://cubert-gmbh.de/uhd-185-firefly/ (accessed on 21 April 2016).
  80. Yuan, H.; Yang, G.; Li, C.; Wang, Y.; Liu, J.; Yu, H.; Feng, H.; Xu, B.; Zhao, X.; Yang, X. Retrieving Soybean Leaf Area Index from Unmanned Aerial Vehicle Hyperspectral Remote Sensing: Analysis of RF, ANN, and SVM Regression Models. Remote Sens. 2017, 9, 309. [Google Scholar] [CrossRef]
  81. IMEC. Hyperspectral Imaging. Available online: https://www.imec-int.com/en/hyperspectral-imaging (accessed on 11 October 2017).
  82. Lambrechts, A.; Gonzalez, P.; Geelen, B.; Soussan, P.; Tack, K.; Jayapala, M. A CMOS-compatible, integrated approach to hyper- and multispectral imaging. In Proceedings of the 2014 IEEE International Electron Devices Meeting, San Francisco, CA, USA, 15–17 December 2014; IEEE: Piscataway, NJ, USA, 2014. [Google Scholar]
  83. Cubert. GmbH Butterfly X2 Announced. Available online: http://cubert-gmbh.de/2016/02/11/butterfly-x2-announced/ (accessed on 10 April 2016).
  84. Photon Focus. AG Hyperspectral Cameras. Available online: http://www.photonfocus.com/de/produkte/kamerafinder/?no_cache=1&cid=9&pfid=2 (accessed on 11 October 2017).
  85. Constantin, D.; Rehak, M.; Akhtman, Y.; Liebisch, F. Detection of Crop Properties by Means of Hyperspectral Remote Sensing from a Micro UAV. Available online: https://www.researchgate.net/publication/301920193_Detection_of_crop_properties_by_means_of_hyperspectral_remote_sensing_from_a_micro_UAV (accessed on 9 July 2018).
  86. Khanna, R.; Sa, I.; Nieto, J.; Siegwart, R. On Field Radiometric Calibration for Multispectral Cameras. Available online: https://ieeexplore.ieee.org/document/7989768/ (accessed on 9 July 2018).
  87. Mihoubi, S.; Losson, O.; Mathon, B.; Macaire, L. Multispectral Demosaicing Using Pseudo-Panchromatic Image. IEEE Trans. Comput. Imaging 2017, 3, 982–995. [Google Scholar] [CrossRef] [Green Version]
  88. IMEC. Imec Demonstrates Shortwave Infrared (SWIR) Range Hyperspectral Imaging Camera. Available online: https://www.imec-int.com/en/articles/imec-demonstrates-shortwave-infrared-swir-range-hyperspectral-imaging-camera (accessed on 1 March 2018).
  89. Delaure, B. Cubert and VITO Remote Sensing Introduced Compact Hyperspectral COSI-cam at EGU 2016. Available online: https://vito.be/en/news-events/news/cubert-and-vito-remote-sensing-introduced-compact-hyperspectral-cosi-cam-at-egu-2016 (accessed on 14 December 2017).
  90. Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B. A spatio-spectral camera for high resolution hyperspectral imaging. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 223–228. [Google Scholar] [CrossRef]
  91. Sima, A.A.; Baeck, P.; Nuyts, D.; Delalieux, S.; Livens, S.; Blommaert, J.; Delauré, B.; Boonen, M. Compact Hyperspectral Imaging System (cosi) for Small Remotely Piloted Aircraft Systems (rpas)—System Overview and First Performance Evaluation Results. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 1157–1164. [Google Scholar] [CrossRef]
  92. Berra, E.F.; Gaulton, R.; Barr, S. Commercial Off-the-Shelf Digital Cameras on Unmanned Aerial Vehicles for Multitemporal Monitoring of Vegetation Reflectance and NDVI. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4878–4886. [Google Scholar] [CrossRef]
  93. Adobe Systems Incorporated. Digital Negative (DNG), Adobe DNG Converter | Adobe Photoshop CC. Available online: https://helpx.adobe.com/photoshop/digital-negative.html (accessed on 1 March 2018).
  94. Harwin, S.; Lucieer, A.; Osborn, J. The Impact of the Calibration Method on the Accuracy of Point Clouds Derived Using Unmanned Aerial Vehicle Multi-View Stereopsis. Remote Sens. 2015, 7, 11933–11953. [Google Scholar] [CrossRef] [Green Version]
  95. Wallace, L.; Lucieer, A.; Malenovský, Z.; Turner, D.; Vopěnka, P. Assessment of Forest Structure Using Two UAV Techniques: A Comparison of Airborne Laser Scanning and Structure from Motion (SfM) Point Clouds. Forests 2016, 7, 62. [Google Scholar] [CrossRef]
  96. Jagt, B.; Lucieer, A.; Wallace, L.; Turner, D.; Durand, M. Snow Depth Retrieval with UAS Using Photogrammetric Techniques. Geosciences 2015, 5, 264–285. [Google Scholar] [CrossRef]
  97. Turner, D.; Lucieer, A.; Wallace, L. Direct Georeferencing of Ultrahigh-Resolution UAV Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2738–2745. [Google Scholar] [CrossRef]
  98. Gautam, D.; Lucieer, A.; Malenovský, Z.; Watson, C. Comparison of MEMS-Based and FOG-Based IMUs to Determine Sensor Pose on an Unmanned Aircraft System. J. Surv. Eng. 2017, 143, 04017009. [Google Scholar] [CrossRef]
  99. Remondino, F.; El-Hakim, S. Image-based 3D Modelling: A Review. Photogramm. Rec. 2006, 21, 269–291. [Google Scholar] [CrossRef]
  100. Szeliski, R. Computer Vision; Texts in Computer Science; Springer: London, UK, 2011; ISBN 978-1-84882-934-3. [Google Scholar]
  101. Mac Arthur, A.; MacLellan, C.J.; Malthus, T. The Fields of View and Directional Response Functions of Two Field Spectroradiometers. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3892–3907. [Google Scholar] [CrossRef]
  102. Richter, R.; Schläpfer, D. Geo-atmospheric processing of airborne imaging spectrometry data. Part 2: Atmospheric/topographic correction. Int. J. Remote Sens. 2002, 23, 2631–2649. [Google Scholar] [CrossRef]
103. Turner, D.; Lucieer, A.; McCabe, M.; Parkes, S.; Clarke, I. Pushbroom Hyperspectral Imaging from an Unmanned Aircraft System (UAS)—Geometric Processing Workflow and Accuracy Assessment. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W6, 379–384. [Google Scholar] [CrossRef]
  104. Baiocchi, V.; Dominici, D.; Milone, M.V.; Mormile, M. Development of a software to optimize and plan the acquisitions from UAV and a first application in a post-seismic environment. Eur. J. Remote Sens. 2014, 47, 477–496. [Google Scholar] [CrossRef] [Green Version]
  105. Turner, D.; Lucieer, A.; Watson, C. An Automated Technique for Generating Georectified Mosaics from Ultra-High Resolution Unmanned Aerial Vehicle (UAV) Imagery, Based on Structure from Motion (SfM) Point Clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef] [Green Version]
  106. Weiss, S.; Scaramuzza, D.; Siegwart, R. Monocular-SLAM-based navigation for autonomous micro helicopters in GPS-denied environments. J. Field Robot. 2011, 28, 854–874. [Google Scholar] [CrossRef]
  107. Habib, A.; Han, Y.; Xiong, W.; He, F.; Zhang, Z.; Crawford, M. Automated Ortho-Rectification of UAV-Based Hyperspectral Data over an Agricultural Field Using Frame RGB Imagery. Remote Sens. 2016, 8, 796. [Google Scholar] [CrossRef]
  108. Ramirez-Paredes, J.-P.; Lary, D.J.; Gans, N.R. Low-altitude Terrestrial Spectroscopy from a Pushbroom Sensor. J. Field Robot. 2016, 33, 837–852. [Google Scholar] [CrossRef]
  109. Dawn, S.; Saxena, V.; Sharma, B. Remote Sensing Image Registration Techniques: A Survey. In Image and Signal Processing; Elmoataz, A., Lezoray, O., Nouboud, F., Mammass, D., Meunier, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6134, pp. 103–112. ISBN 978-3-642-13680-1. [Google Scholar]
  110. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; Wiley: New York, NY, USA, 2001; ISBN 978-0-471-30924-6. [Google Scholar]
  111. Jhan, J.-P.; Rau, J.-Y.; Huang, C.-Y. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS. ISPRS J. Photogramm. Remote Sens. 2016, 114, 66–77. [Google Scholar] [CrossRef]
  112. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral Remote Sensing from Unmanned Aircraft: Image Processing Workflows and Applications for Rangeland Environments. Remote Sens. 2011, 3, 2529–2551. [Google Scholar] [CrossRef] [Green Version]
  113. Torres-Sánchez, J.; López-Granados, F.; De Castro, A.I.; Peña-Barragán, J.M. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management. PLoS ONE 2013, 8, e58210. [Google Scholar] [CrossRef] [PubMed]
  114. Turner, D.; Lucieer, A.; Malenovský, Z.; King, D.; Robinson, S. Spatial Co-Registration of Ultra-High Resolution Visible, Multispectral and Thermal Images Acquired with a Micro-UAV over Antarctic Moss Beds. Remote Sens. 2014, 6, 4003–4024. [Google Scholar] [CrossRef] [Green Version]
  115. Vakalopoulou, M.; Karantzalos, K. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data. Remote Sens. 2014, 6, 3409–3426. [Google Scholar] [CrossRef] [Green Version]
  116. Schott, J.R. Remote sensing: The Image Chain Approach, 2nd ed.; Oxford University Press: New York, NY, USA, 2007; ISBN 978-0-19-517817-3. [Google Scholar]
117. Nicodemus, F.E.; Richmond, J.C.; Hsia, J.J.; Ginsberg, I.W.; Limperis, T. Geometrical Considerations and Nomenclature for Reflectance; National Bureau of Standards: Washington, DC, USA, 1977; p. 67.
  118. Schaepman-Strub, G.; Schaepman, M.E.; Painter, T.H.; Dangel, S.; Martonchik, J.V. Reflectance quantities in optical remote sensing—definitions and case studies. Remote Sens. Environ. 2006, 103, 27–42. [Google Scholar] [CrossRef]
  119. Schläpfer, D.; Richter, R.; Feingersh, T. Operational BRDF Effects Correction for Wide-Field-of-View Optical Scanners (BREFCOR). IEEE Trans. Geosci. Remote Sens. 2015, 53, 1855–1864. [Google Scholar] [CrossRef] [Green Version]
  120. Gege, P.; Fries, J.; Haschberger, P.; Schötz, P.; Schwarzer, H.; Strobl, P.; Suhr, B.; Ulbrich, G.; Jan Vreeling, W. Calibration facility for airborne imaging spectrometers. ISPRS J. Photogramm. Remote Sens. 2009, 64, 387–397. [Google Scholar] [CrossRef]
  121. Sandau, R. (Ed.) Digital Airborne Camera: Introduction and Technology; Springer: Dordrecht, The Netherlands; New York, NY, USA, 2010; ISBN 978-1-4020-8877-3. [Google Scholar]
  122. Schowengerdt, R.A. Remote Sensing, Models, and Methods for Image Processing, 3rd ed.; Academic Press: Burlington, MA, USA, 2007; ISBN 978-0-12-369407-2. [Google Scholar]
  123. Jablonski, J.; Durell, C.; Slonecker, T.; Wong, K.; Simon, B.; Eichelberger, A.; Osterberg, J. Best Practices in Passive Remote Sensing VNIR Hyperspectral System Hardware Calibrations; Bannon, D.P., Ed.; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; p. 986004. [Google Scholar]
  124. Yoon, H.W.; Kacker, R.N. Guidelines for Radiometric Calibration of Electro-Optical Instruments for Remote Sensing; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015.
  125. Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337. [Google Scholar] [CrossRef] [Green Version]
  126. Büttner, A.; Röser, H.-P. Hyperspectral Remote Sensing with the UAS “Stuttgarter Adler”—System Setup, Calibration and First Results. Photogramm. Fernerkund. Geoinf. 2014, 2014, 265–274. [Google Scholar] [CrossRef] [PubMed]
  127. Aasen, H.; Bendig, J.; Bolten, A.; Bennertz, S.; Willkomm, M.; Bareth, G. Introduction and preliminary results of a calibration for full-frame hyperspectral cameras to monitor agricultural crops with UAVs. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-7, 1–8. [Google Scholar] [CrossRef]
  128. Brachmann, J.F.S.; Baumgartner, A.; Lenhard, K. Calibration Procedures for Imaging Spectrometers: Improving Data Quality from Satellite Missions to UAV Campaigns; Meynart, R., Neeck, S.P., Kimura, T., Shimoda, H., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; p. 1000010. [Google Scholar]
  129. Yang, G.; Li, C.; Wang, Y.; Yuan, H.; Feng, H.; Xu, B.; Yang, X. The DOM Generation and Precise Radiometric Calibration of a UAV-Mounted Miniature Snapshot Hyperspectral Imager. Remote Sens. 2017, 9, 642. [Google Scholar] [CrossRef]
  130. Berni, J.; Zarco-Tejada, P.J.; Suarez, L.; Fereres, E. Thermal and Narrowband Multispectral Remote Sensing for Vegetation Monitoring from an Unmanned Aerial Vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738. [Google Scholar] [CrossRef] [Green Version]
  131. Del Pozo, S.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Felipe-García, B. Vicarious Radiometric Calibration of a Multispectral Camera on Board an Unmanned Aerial System. Remote Sens. 2014, 6, 1918–1937. [Google Scholar] [CrossRef] [Green Version]
  132. Goldman, D.B. Vignette and Exposure Calibration and Compensation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2276–2288. [Google Scholar] [CrossRef] [PubMed]
  133. Kim, S.J.; Pollefeys, M. Robust Radiometric Calibration and Vignetting Correction. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 562–576. [Google Scholar] [CrossRef] [PubMed] [Green Version]
134. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983. [Google Scholar] [CrossRef]
135. Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D. Geometric Calibration and Radiometric Correction of the MAIA Multispectral Camera. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-3/W3, 149–156. [Google Scholar] [CrossRef]
  136. Beisl, U. Absolute spectroradiometric calibration of the ADS40 sensor. In Proceedings of the ISPRS Commission I Symposium “From Sensors to Imagery”, Marne-la-Vallée, Paris, 4–6 July 2006; pp. 3–6. [Google Scholar]
  137. D’Odorico, P.; Schaepman, M. Monitoring the Spectral Performance of the APEX Imaging Spectrometer for Inter-Calibration of Satellite Missions; Remote Sensing Laboratories, Department of Geography, University of Zurich: Zurich, Switzerland, 2012. [Google Scholar]
  138. Liu, Y.; Wang, T.; Ma, L.; Wang, N. Spectral Calibration of Hyperspectral Data Observed From a Hyperspectrometer Loaded on an Unmanned Aerial Vehicle Platform. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2630–2638. [Google Scholar] [CrossRef]
  139. Busetto, L.; Meroni, M.; Crosta, G.F.; Guanter, L.; Colombo, R. SpecCal: Novel software for in-field spectral characterization of high-resolution spectrometers. Comput. Geosci. 2011, 37, 1685–1691. [Google Scholar] [CrossRef]
  140. Berk, A.; Anderson, G.P.; Acharya, P.K.; Bernstein, L.S.; Muratov, L.; Lee, J.; Fox, M.; Adler-Golden, S.M.; Chetwynd, J.H.; Hoke, M.L.; et al. MODTRAN 5: A Reformulated Atmospheric Band Model with Auxiliary Species and Practical Multiple Scattering Options: Update; Shen, S.S., Lewis, P.E., Eds.; International Society for Optics and Photonics: Bellingham, WA, USA, 2005; p. 662. [Google Scholar]
141. Ryan, R.E.; Pagnutti, M. Enhanced absolute and relative radiometric calibration for digital aerial cameras. In Photogrammetric Week ’09: Keynote and Invited Papers of the 100th Anniversary of the Photogrammetric Week Series (52nd Photogrammetric Week) Held at Universitaet Stuttgart, 7–11 September 2009; Fritsch, D., Ed.; Wichmann: Heidelberg, Germany, 2009; pp. 81–90. ISBN 978-3-87907-483-9. [Google Scholar]
  142. Jehle, M.; Hueni, A.; Lenhard, K.; Baumgartner, A.; Schaepman, M.E. Detection and Correction of Radiance Variations During Spectral Calibration in APEX. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1023–1027. [Google Scholar] [CrossRef] [Green Version]
143. Gueymard, C. SMARTS2, A Simple Model of the Atmospheric Radiative Transfer of Sunshine: Algorithms and Performance Assessment; Florida Solar Energy Center/University of Central Florida: Cocoa, FL, USA, 1995; p. 84. [Google Scholar]
  144. Suomalainen, J.; Hakala, T.; Peltoniemi, J.; Puttonen, E. Polarised Multiangular Reflectance Measurements Using the Finnish Geodetic Institute Field Goniospectrometer. Sensors 2009, 9, 3891–3907. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  145. Julitta, T. Optical Proximal Sensing for Vegetation Monitoring; University of Milano—Bicocca: Milano, Italy, 2015. [Google Scholar]
  146. Pacheco-Labrador, J.; Martín, M. Characterization of a Field Spectroradiometer for Unattended Vegetation Monitoring. Key Sensor Models and Impacts on Reflectance. Sensors 2015, 15, 4154–4175. [Google Scholar] [CrossRef] [PubMed]
  147. Bais, A.F.; Kazadzis, S.; Balis, D.; Zerefos, C.S.; Blumthaler, M. Correcting global solar ultraviolet spectra recorded by a Brewer spectroradiometer for its angular response error. Appl. Opt. 1998, 37, 6339. [Google Scholar] [CrossRef] [PubMed]
  148. Emde, C.; Buras-Schnell, R.; Kylling, A.; Mayer, B.; Gasteiger, J.; Hamann, U.; Kylling, J.; Richter, B.; Pause, C.; Dowling, T.; et al. The libRadtran software package for radiative transfer calculations (version 2.0.1). Geosci. Model Dev. 2016, 9, 1647–1672. [Google Scholar] [CrossRef] [Green Version]
  149. Mayer, B.; Kylling, A. Technical note: The libRadtran software package for radiative transfer calculations—Description and examples of use. Atmos. Chem. Phys. 2005, 5, 1855–1877. [Google Scholar] [CrossRef]
  150. Bogren, W.S.; Burkhart, J.F.; Kylling, A. Tilt error in cryospheric surface radiation measurements at high latitudes: A model study. Cryosphere 2016, 10, 613–622. [Google Scholar] [CrossRef]
  151. Anderson, K.; Milton, E.J.; Rollin, E.M. Calibration of dual-beam spectroradiometric data. Int. J. Remote Sens. 2006, 27, 975–986. [Google Scholar] [CrossRef]
  152. Smith, G.M.; Milton, E.J. The use of the empirical line method to calibrate remotely sensed data to reflectance. Int. J. Remote Sens. 1999, 20, 2653–2662. [Google Scholar] [CrossRef]
  153. Wang, C.; Myint, S.W. A Simplified Empirical Line Method of Radiometric Calibration for Small Unmanned Aircraft Systems-Based Remote Sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1876–1885. [Google Scholar] [CrossRef]
  154. Lu, B.; He, Y. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens. 2017, 128, 73–85. [Google Scholar] [CrossRef]
  155. Torres-Sánchez, J.; Peña, J.M.; de Castro, A.I.; López-Granados, F. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 2014, 103, 104–113. [Google Scholar] [CrossRef] [Green Version]
  156. Markelin, L.; Honkavaara, E.; Schläpfer, D.; Bovet, S.; Korpela, I. Assessment of Radiometric Correction Methods for ADS40 Imagery. Photogramm. Fernerkund. Geoinf. 2012, 2012, 251–266. [Google Scholar] [CrossRef] [PubMed]
  157. Moran, M.S.; Ross, B.B.; Clarke, T.R.; Qi, J. Deployment and calibration of reference reflectance tarps for use with airborne imaging sensors. Photogramm. Eng. Remote Sens. 2001, 67, 273–286. [Google Scholar]
  158. Miura, T.; Huete, A.R. Performance of Three Reflectance Calibration Methods for Airborne Hyperspectral Spectrometer Data. Sensors 2009, 9, 794–813. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  159. Beisl, U.; Telaar, J.; von Schönermark, M. Atmospheric Correction, Reflectance Calibration and BRDF Correction for ADS40 Image Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, XXXVII, 7–12. [Google Scholar]
  160. Sabater, N.; Vicent, J.; Alonso, L.; Cogliati, S.; Verrelst, J.; Moreno, J. Impact of Atmospheric Inversion Effects on Solar-Induced Chlorophyll Fluorescence: Exploitation of the Apparent Reflectance as a Quality Indicator. Remote Sens. 2017, 9, 622. [Google Scholar] [CrossRef]
  161. Aasen, H. Influence of the viewing geometry on hyperspectral data retrieved from UAV snapshot cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-7, 257–261. [Google Scholar] [CrossRef]
  162. Honkavaara, E.; Markelin, L.; Hakala, T.; Peltoniemi, J.I. The Metrology of Directional, Spectral Reflectance Factor Measurements Based on Area Format Imaging by UAVs. Photogramm. Fernerkund. Geoinf. 2014, 2014, 175–188. [Google Scholar] [CrossRef]
  163. Honkavaara, E.; Khoramshahi, E. Radiometric Correction of Close-Range Spectral Image Blocks Captured Using an Unmanned Aerial Vehicle with a Radiometric Block Adjustment. Remote Sens. 2018, 10, 256. [Google Scholar] [CrossRef]
  164. Beisl, U. Correction of Bidirectional Effects in Imaging Spectrometer Data; Remote Sensing Series; Remote Sensing Laboratories, Department of Geography: Zurich, Switzerland, 2001; ISBN 978-3-03703-001-1. [Google Scholar]
  165. Von Schönermark, M.; Geiger, B.; Röser, H.-P. (Eds.) Reflection Properties of Vegetation and Soil: With a BRDF Data Base; 1. Aufl.; Wissenschaft und Technik Verlag: Berlin, Germany, 2004; ISBN 978-3-89685-565-7. [Google Scholar]
  166. Weyermann, J.; Damm, A.; Kneubuhler, M.; Schaepman, M.E. Correction of Reflectance Anisotropy Effects of Vegetation on Airborne Spectroscopy Data and Derived Products. IEEE Trans. Geosci. Remote Sens. 2014, 52, 616–627. [Google Scholar] [CrossRef]
  167. Hueni, A.; Damm, A.; Kneubuehler, M.; Schlapfer, D.; Schaepman, M.E. Field and Airborne Spectroscopy Cross Validation—Some Considerations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 1117–1135. [Google Scholar] [CrossRef]
  168. Walthall, C.L.; Norman, J.M.; Welles, J.M.; Campbell, G.; Blad, B.L. Simple equation to approximate the bidirectional reflectance from vegetative canopies and bare soil surfaces. Appl. Opt. 1985, 24, 383. [Google Scholar] [CrossRef] [PubMed]
  169. Nilson, T.; Kuusk, A. A reflectance model for the homogeneous plant canopy and its inversion. Remote Sens. Environ. 1989, 27, 157–167. [Google Scholar] [CrossRef]
  170. Richter, R.; Kellenberger, T.; Kaufmann, H. Comparison of Topographic Correction Methods. Remote Sens. 2009, 1, 184–196. [Google Scholar] [CrossRef] [Green Version]
  171. Teillet, P.M.; Guindon, B.; Goodenough, D.G. On the Slope-Aspect Correction of Multispectral Scanner Data. Can. J. Remote Sens. 1982, 8, 84–106. [Google Scholar] [CrossRef] [Green Version]
  172. Shepherd, J.D.; Dymond, J.R. Correcting satellite imagery for the variance of reflectance and illumination with topography. Int. J. Remote Sens. 2003, 24, 3503–3514. [Google Scholar] [CrossRef]
  173. Minnaert, M. The reciprocity principle in lunar photometry. Astrophys. J. 1941, 93, 403. [Google Scholar] [CrossRef]
  174. Adeline, K.R.M.; Chen, M.; Briottet, X.; Pang, S.K.; Paparoditis, N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS J. Photogramm. Remote Sens. 2013, 80, 21–38. [Google Scholar] [CrossRef]
  175. Schläpfer, D.; Richter, R.; Kellenberger, T. Atmospheric and Topographic Correction of Photogrammetric Airborne Digital Scanner Data (ATCOR-ADS); EuroSDR—EUROCOW 2012. Barcelona, Spain, 2012. Available online: http://www.geo.uzh.ch/microsite/rsl-documents/research/publications/other-sci-communications/Schlaepfer_eurocow2012_ATCOR-ADS-1512234752/Schlaepfer_eurocow2012_ATCOR-ADS.pdf (accessed on 9 July 2018).
  176. Richter, R.; Müller, A. De-shadowing of satellite/airborne imagery. Int. J. Remote Sens. 2005, 26, 3137–3148. [Google Scholar] [CrossRef]
  177. Chandelier, L.; Martinoty, G. A Radiometric Aerial Triangulation for the Equalization of Digital Aerial Images and Orthoimages. Photogramm. Eng. Remote Sens. 2009, 75, 193–200. [Google Scholar] [CrossRef]
  178. Collings, S.; Caccetta, P.; Campbell, N.; Wu, X. Empirical Models for Radiometric Calibration of Digital Aerial Frame Mosaics. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2573–2588. [Google Scholar] [CrossRef]
179. Gehrke, S.; Beshah, B.T. Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 317–326. [Google Scholar] [CrossRef]
  180. Hernández López, D.; Felipe García, B.; González Piqueras, J.; Alcázar, G.V. An approach to the radiometric aerotriangulation of photogrammetric images. ISPRS J. Photogramm. Remote Sens. 2011, 66, 883–893. [Google Scholar] [CrossRef]
  181. ITRES. Imagers | ITRES. Available online: http://www.itres.com/imagers/ (accessed on 9 March 2018).
  182. Schaepman, M.E.; Jehle, M.; Hueni, A.; D’Odorico, P.; Damm, A.; Weyermann, J.; Schneider, F.D.; Laurent, V.; Popp, C.; Seidel, F.C.; et al. Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX). Remote Sens. Environ. 2015, 158, 207–219. [Google Scholar] [CrossRef] [Green Version]
  183. Rascher, U.; Alonso, L.; Burkart, A.; Cilia, C.; Cogliati, S.; Colombo, R.; Damm, A.; Drusch, M.; Guanter, L.; Hanus, J.; et al. Sun-induced fluorescence—A new probe of photosynthesis: First maps from the imaging spectrometer HyPlant. Glob. Change Biol. 2015, 21, 4673–4684. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  184. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M.; et al. Imaging Spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248. [Google Scholar] [CrossRef]
  185. Cook, B.; Corp, L.; Nelson, R.; Middleton, E.; Morton, D.; McCorkel, J.; Masek, J.; Ranson, K.; Ly, V.; Montesano, P. NASA Goddard’s LiDAR, Hyperspectral and Thermal (G-LiHT) Airborne Imager. Remote Sens. 2013, 5, 4045–4066. [Google Scholar] [CrossRef] [Green Version]
  186. Specim, Spectral Imaging Ltd. Hyperspectral Imaging System AisaKESTREL. Available online: http://www.specim.fi/products/aisakestrel-hyperspectral-imaging-system/ (accessed on 30 January 2018).
  187. Roosjen, P.; Suomalainen, J.; Bartholomeus, H.; Clevers, J. Hyperspectral Reflectance Anisotropy Measurements Using a Pushbroom Spectrometer on an Unmanned Aerial Vehicle—Results for Barley, Winter Wheat, and Potato. Remote Sens. 2016, 8, 909. [Google Scholar] [CrossRef]
  188. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y.; Komatsu, T. Development of a Low-Cost, Lightweight Hyperspectral Imaging System Based on a Polygon Mirror and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 861–875. [Google Scholar] [CrossRef]
  189. Meng, Z.; Petrov, G.I.; Cheng, S.; Jo, J.A.; Lehmann, K.K.; Yakovlev, V.V.; Scully, M.O. Lightweight Raman spectroscope using time-correlated photon-counting detection. Proc. Natl. Acad. Sci. USA 2015, 112, 12315–12320. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  190. Lelong, C.C.D.; Burger, P.; Jubelin, G.; Roux, B.; Labbé, S.; Baret, F. Assessment of Unmanned Aerial Vehicles Imagery for Quantitative Monitoring of Wheat Crop in Small Plots. Sensors 2008, 8, 3557–3585. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  191. Harwin, S.; Lucieer, A. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef] [Green Version]
  192. Grenzdörffer, G.J. Crop height determination with UAS point clouds. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-1, 135–140. [Google Scholar] [CrossRef]
  193. Eltner, A.; Schneider, D. Analysis of Different Methods for 3D Reconstruction of Natural Surfaces from Parallel-Axes UAV Images. Photogramm. Rec. 2015, 30, 279–299. [Google Scholar] [CrossRef]
  194. Gómez-Gutiérrez, Á.; de Sanjosé-Blasco, J.; de Matías-Bejarano, J.; Berenguer-Sempere, F. Comparing Two Photo-Reconstruction Methods to Produce High Density Point Clouds and DEMs in the Corral del Veleta Rock Glacier (Sierra Nevada, Spain). Remote Sens. 2014, 6, 5407–5427. [Google Scholar] [CrossRef] [Green Version]
  195. Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166. [Google Scholar] [CrossRef]
  196. Puliti, S.; Olerka, H.; Gobakken, T.; Næsset, E. Inventory of Small Forest Areas Using an Unmanned Aerial System. Remote Sens. 2015, 7, 9632–9654. [Google Scholar] [CrossRef] [Green Version]
  197. Van der Wal, T.; Abma, B.; Viguria, A.; Prévinaire, E.; Zarco-Tejada, P.J.; Serruys, P.; van Valkengoed, E.; van der Voet, P. Fieldcopter: Unmanned aerial systems for crop monitoring services. In Precision agriculture ’13; Stafford, J., Ed.; Wageningen Academic Publishers: Wageningen, The Netherlands, 2013; pp. 169–175. [Google Scholar]
  198. Hakala, T.; Suomalainen, J.; Peltoniemi, J.I. Acquisition of Bidirectional Reflectance Factor Dataset Using a Micro Unmanned Aerial Vehicle and a Consumer Camera. Remote Sens. 2010, 2, 819–832. [Google Scholar] [CrossRef] [Green Version]
  199. Anderson, K.; Dungan, J.L.; MacArthur, A. On the reproducibility of field-measured reflectance factors in the context of vegetation studies. Remote Sens. Environ. 2011, 115, 1893–1905. [Google Scholar] [CrossRef]
  200. Anderson, K.; Milton, E.J. On the temporal stability of ground calibration targets: Implications for the reproducibility of remote sensing methodologies. Int. J. Remote Sens. 2006, 27, 3365–3374. [Google Scholar] [CrossRef]
  201. Miyoshi, G.T.; Imai, N.N.; Tommaselli, A.M.G.; Honkavaara, E.; Näsi, R.; Moriya, É.A.S. Radiometric block adjustment of hyperspectral image blocks in the Brazilian environment. Int. J. Remote Sens. 2018, 1–21. [Google Scholar] [CrossRef]
  202. Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I. Spectral imaging from UAVs under varying illumination conditions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013. [Google Scholar] [CrossRef]
  203. Walter, A.; Finger, R.; Huber, R.; Buchmann, N. Opinion: Smart farming is key to developing sustainable agriculture. Proc. Natl. Acad. Sci. USA 2017, 114, 6148–6150. [Google Scholar] [CrossRef] [PubMed]
  204. Hunt, E.R.; Daughtry, C.S.T. What good are unmanned aircraft systems for agricultural remote sensing and precision agriculture? Int. J. Remote Sens. 2017, 1–32. [Google Scholar] [CrossRef]
  205. Araus, J.L.; Cairns, J.E. Field high-throughput phenotyping: The new crop breeding frontier. Trends Plant Sci. 2014, 19, 52–61. [Google Scholar] [CrossRef] [PubMed]
  206. Fiorani, F.; Schurr, U. Future Scenarios for Plant Phenotyping. Annu. Rev. Plant Biol. 2013, 64, 267–291. [Google Scholar] [CrossRef] [PubMed]
  207. Yang, G.; Liu, J.; Zhao, C.; Li, Z.; Huang, Y.; Yu, H.; Xu, B.; Yang, X.; Zhu, D.; Zhang, X.; et al. Unmanned Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives. Front. Plant Sci. 2017, 8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  208. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  209. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis – Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed]
  210. Aasen, H.; Bareth, G. Ground and UAV sensing approaches for spectral and 3D crop trait estimation. In Hyperspectral Remote Sensing of Vegetation—Volume II: Advanced Approaches and Applications in Crops and Plants; Thenkabail, P., Lyon, J.G., Huete, A., Eds.; Taylor and Francis Inc.: Abingdon, UK, 2018. [Google Scholar]
  211. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  212. Tilly, N.; Aasen, H.; Bareth, G. Fusion of Plant Height and Vegetation Indices for the Estimation of Barley Biomass. Remote Sens. 2015, 7, 11449–11480. [Google Scholar] [CrossRef] [Green Version]
  213. Hueni, A.; Nieke, J.; Schopfer, J.; Kneubühler, M.; Itten, K.I. The spectral database SPECCHIO for improved long-term usability and data sharing. Comput. Geosci. 2009, 35, 557–565. [Google Scholar] [CrossRef] [Green Version]
  214. Duarte, L.; Teodoro, A.C.; Moutinho, O.; Gonçalves, J.A. Open-source GIS application for UAV photogrammetry based on MicMac. Int. J. Remote Sens. 2017, 38, 3181–3202. [Google Scholar] [CrossRef]
  215. Itten, K.I.; Dell’Endice, F.; Hueni, A.; Kneubühler, M.; Schläpfer, D.; Odermatt, D.; Seidel, F.; Huber, S.; Schopfer, J.; Kellenberger, T.; et al. APEX—The Hyperspectral ESA Airborne Prism Experiment. Sensors 2008, 8, 6235–6259. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  216. Roy, D.P.; Borak, J.S.; Devadiga, S.; Wolfe, R.E.; Zheng, M.; Descloitres, J. The MODIS land product quality assessment approach. Remote Sens. Environ. 2002, 83, 62–76. [Google Scholar] [CrossRef]
  217. Bareth, G.; Aasen, H.; Bendig, J.; Gnyp, M.L.; Bolten, A.; Jung, A.; Michels, R.; Soukkamäki, J. Low-weight and UAV-based Hyperspectral Full-frame Cameras for Monitoring Crops: Spectral Comparison with Portable Spectroradiometer Measurements. Photogramm. Fernerkund. Geoinf. 2015, 2015, 69–79. [Google Scholar] [CrossRef]
  218. Domingues Franceschini, M.; Bartholomeus, H.; van Apeldoorn, D.; Suomalainen, J.; Kooistra, L. Intercomparison of Unmanned Aerial Vehicle and Ground-Based Narrow Band Spectrometers Applied to Crop Trait Monitoring in Organic Potato Production. Sensors 2017, 17, 1428. [Google Scholar] [CrossRef] [PubMed]
  219. Von Bueren, S.K.; Burkart, A.; Hueni, A.; Rascher, U.; Tuohy, M.P.; Yule, I.J. Deploying four optical UAV-based sensors over grassland: Challenges and limitations. Biogeosciences 2015, 12, 163–175. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The path of information from a particle (e.g., pigments within the leaf), object, or surface to the data product. The spectral signal is influenced by the environment, the sensor, the measurement protocol, and data processing on the path to its representation as a pixel in a data product. In combination with metadata, this representation becomes information.
Figure 2. Example images captured by the sequential Rikola Fabry–Pérot Interferometer (FPI) (left), multi-point spectrometer CUBERT Firefleye (center) and filter-on-chip Imec NIR (right) 2D imagers. The excerpt shows one 5 × 5 tile used to capture the spectral information.
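For the filter-on-chip imager shown on the right, each spectral band is sampled by one pixel within a repeating 5 × 5 tile of filters deposited on the detector, so the raw frame has to be deinterleaved into a band cube before further processing. The following minimal Python/NumPy sketch illustrates this deinterleaving; the regular 5 × 5 layout, the band ordering, and the frame size are simplifying assumptions for illustration and do not reproduce the exact filter pattern or calibration of the Imec sensor.

```python
import numpy as np

def demosaic_tiled(raw, tile=5):
    """Deinterleave a filter-on-chip mosaic frame into a spectral cube.

    raw  : 2D array of digital numbers whose pixels carry a repeating
           tile x tile pattern of spectral filters (assumed regular layout).
    tile : edge length of the filter tile (5 for a 5 x 5 = 25-band mosaic).

    Returns a cube of shape (rows // tile, cols // tile, tile * tile),
    i.e., a coarser spatial grid with one plane per spectral band.
    """
    rows, cols = raw.shape
    rows -= rows % tile          # crop to a whole number of tiles
    cols -= cols % tile
    raw = raw[:rows, :cols]

    bands = []
    for i in range(tile):        # row offset within the tile
        for j in range(tile):    # column offset within the tile
            bands.append(raw[i::tile, j::tile])
    return np.stack(bands, axis=-1)

# Synthetic 2045 x 1080 frame: the result is a 409 x 216 x 25 cube, matching
# the per-band resolution quoted for the NIR filter-on-chip imager in Table 1.
cube = demosaic_tiled(np.random.randint(0, 1024, (2045, 1080)), tile=5)
print(cube.shape)
```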
Figure 3. TerraLuma pushbroom system: Image (top left) and drawing of the sensor payload (top right) and device interaction flow chart (bottom; CAD design and flow chart: Richard Ballard, TerraLuma group).
Figure 4. TerraLuma 2D imager system (top) with exemplary device interaction flow chart (bottom; source: Richard Ballard, TerraLuma group).
Figure 5. The full data processing workflow to create a reflectance data product. First, sensor-related calibration procedures are carried out: relative calibration (RC1) and spectral calibration (SC) transform the digital numbers (DN) of the sensor into normalized DN (DNn), and an absolute radiometric calibration (RC2) can additionally be carried out to generate at-sensor radiance (Ls). Second, the data are transformed to reflectance factors (R) with the empirical line method (ELM), based on a second radiometrically calibrated reference device on the ground or on the UAV, or on models. Geometric processing (GP) estimates the relative position and orientation of the measurements and composes the data into a scene. Radiometric block adjustment can be used at different steps in the process to optimize the radiometry of the scene and to correct for bidirectional reflectance distribution function (BRDF) effects. Additional modules may then transform the reflectance factors in the scene into reflectance quantities (cf. Section 4.4), and shadow and topography effects may be corrected. Independent radiometric reference targets are used to validate the data. The processing procedures are tracked in metadata to allow an accurate interpretation of the results.
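As a rough illustration of the radiometric part of this workflow, the Python sketch below chains dark-frame subtraction, flat-field (vignetting) correction, absolute calibration to at-sensor radiance, and a two-panel empirical line transformation to reflectance factors for a single band. The variable names, coefficient values, and the simple linear ELM are illustrative assumptions only and do not reproduce the implementation of any of the processing chains cited in this review.

```python
import numpy as np

def dn_to_radiance(dn, dark, flat, gain, exposure_ms):
    """Relative (RC1) and absolute (RC2) radiometric calibration of one band.

    dn          : raw digital numbers (2D array)
    dark        : dark-current frame recorded with closed shutter
    flat        : flat-field / vignetting correction frame (unit mean, assumed)
    gain        : absolute calibration coefficient (assumed units:
                  W m-2 sr-1 nm-1 per normalized DN per second)
    exposure_ms : integration time in milliseconds
    """
    dn_norm = (dn - dark) / flat                     # normalized DN (DNn)
    return gain * dn_norm / (exposure_ms / 1000.0)   # at-sensor radiance Ls

def empirical_line(radiance, panel_radiance, panel_reflectance):
    """Two-point empirical line method (ELM) to reflectance factors R.

    panel_radiance    : measured at-sensor radiances of two reference panels
    panel_reflectance : their known reflectance factors (e.g., 0.05 and 0.50)
    """
    slope, offset = np.polyfit(panel_radiance, panel_reflectance, 1)
    return slope * radiance + offset

# Hypothetical usage for one spectral band of a 1280 x 960 frame:
dn = np.random.randint(200, 3500, (960, 1280)).astype(float)
dark = np.full_like(dn, 96.0)
flat = np.ones_like(dn)
ls = dn_to_radiance(dn, dark, flat, gain=0.002, exposure_ms=10.0)
r = empirical_line(ls, panel_radiance=np.array([0.08, 0.65]),
                   panel_reflectance=np.array([0.05, 0.50]))
```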
Figure 6. Spectral 3D point cloud (left) and 2D orthophoto (right) of a spruce-dominated forest area in Finland, captured with a spectral 2D imager (Rikola FPI). The orthophoto has a ground sampling distance (GSD) of 10 cm, and the point cloud has a 5-cm point interval. The spectral bands are green (520.00 nm, FWHM = 22 nm), red (598.80 nm, FWHM = 24 nm), and near-infrared (763.70 nm, FWHM = 32 nm). The 3D point cloud enables spectral analysis of object properties at multiple height levels for each X and Y coordinate, whereas the orthophoto stores only a single value per X and Y coordinate.
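The difference between the two data structures can be illustrated with a toy example: a spectral point cloud keeps every spectral sample at every height within an X/Y cell accessible, whereas a raster orthophoto retains only one value per cell. All coordinates and reflectance values below are fabricated for illustration.

```python
import numpy as np

# Toy spectral 3D point cloud: columns are X, Y, Z and three band values
# (green, red, NIR). Values are made up for illustration only.
points = np.array([
    # X      Y     Z     green  red   NIR
    [10.02, 5.01, 21.3,  0.04,  0.03, 0.45],   # tree crown top
    [10.04, 5.03, 14.8,  0.05,  0.04, 0.38],   # lower branch
    [10.03, 5.02,  0.2,  0.08,  0.10, 0.22],   # ground below the crown
])

cell_size = 0.10  # 10-cm grid, as for the orthophoto in Figure 6

# Point cloud: all height levels within one X/Y cell remain accessible.
ix, iy = np.floor(points[:, 0] / cell_size), np.floor(points[:, 1] / cell_size)
in_cell = (ix == ix[0]) & (iy == iy[0])
print("spectra stored in this cell:\n", points[in_cell, 3:])

# Orthophoto: only one value per cell survives, e.g., that of the highest point.
top = points[in_cell][np.argmax(points[in_cell, 2])]
print("single orthophoto value:", top[3:])
```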
Table 1. Different spectral sensor types for unmanned aerial vehicle (UAV) sensing systems with properties of example sensors. The exact numbers may vary between models and should be taken as indicative only. The visualizations indicate the image cube slices recorded during each measurement of a sensor type (adapted from drawings provided by Stefan Livens). The information in the table is compiled from the literature, the manufacturers' websites, and personal correspondence with Stefan Livens from VITO (for the COSI cam), Trond Løke from HySpex (for the Mjolnir), and Robert Parker from Micasense (for the RedEdge-m). FWHM: full width at half maximum.
Sensor Type | Scanning Dimension | Spatial Resolution | Spectral Bands ** | Spectral Resolution (FWHM) | Bit Depth | Example Sensors
Point | spatial | none | ++++ (1024) / +++++ (3648) | ++++ (1–12 nm) / +++++ (0.1–10 nm) | 12 bit / 16 bit | Ocean Optics STS / Ocean Optics USB4000
Pushbroom, VNIR | spatial | +++ (1240) | ++++ (200) | +++ (3.2–6.4 nm) | 12 bit | HySpex Mjolnir V; Specim AisaKESTREL 10; Headwall micro-hyperspec/nano-hyperspec; Bayspec OCI; Resonon Pika
Pushbroom, SWIR | spatial | +++ (620) | ++++ (300) | +++ (~6 nm) | 16 bit | HySpex Mjolnir S (970–2500 nm); Specim AisaKESTREL 16 (600–1640 nm)
2D imager, multi-camera | spatial | +++ (1280 × 960) | + (5) | + (10–40 nm) | 12 bit | Micasense RedEdge-m; Parrot Sequoia; Tetracam Mini MCA; Macaw
2D imager, sequential (multi-)band, VNIR | spectral | +++ (1000 × 1000) | +++ (100) | +++ (5–12 nm) | 12 bit | Rikola FPI VNIR
2D imager, sequential (multi-)band, SWIR | spectral | ++ (320 × 256) | ++ (30) | ++ (20–30 nm) | – | Prototype FPI SWIR
2D imager, snapshot, multi-point | none | + (50 × 50) | +++ (125) | +++ (5–25 nm) | 12 bit | Cubert FireFleye
2D imager, snapshot, filter-on-chip, VIS | none | ++ (512 × 272) | ++ (16) | ++ (5–10 nm) | 10 bit | imec SNm4x4 *
2D imager, snapshot, filter-on-chip, NIR | none | ++ (409 × 216) | ++ (25) | ++ (5–10 nm) | 10 bit | imec SNm5x5 *
2D imager, characterized (modified) RGB | none | ++++ (3000 × 4000) | + (3) | + (50–100 nm) | 12 bit | Canon S110 (NIR)
2D imager, spatiospectral | spatiospectral | ++++ (2000) | ++ (160) | ++ (5–10 nm) | 8 bit | COSI cam; Cubert ButterflEYE LS
* Sold by different companies, e.g., Cubert Butterfly VIS/NIR, ximea MQ022HG-IM-SM4X4-VIS/MQ022HG-IM-SM5X5-NIR, photonFocus MV1-D2048x1088-HS03-96-G2/MV1-D2048x1088-HS02-96-G2. ** The number of spectral bands depends on the configuration and binning of the devices and should be taken as indicative only.
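The spatial resolutions listed above translate into a ground sampling distance (GSD) only in combination with the flying height and the sensor's field of view. A minimal sketch of this relation follows; the field of view, flying height, and pixel count are illustrative values and are not taken from any specific sensor in the table.

```python
import math

def ground_sampling_distance(flight_height_m, fov_deg, n_pixels_across):
    """Approximate across-track GSD for a nadir-looking imager over flat terrain.

    flight_height_m : flying height above ground level [m]
    fov_deg         : total across-track field of view [degrees]
    n_pixels_across : number of across-track pixels
    """
    swath_m = 2.0 * flight_height_m * math.tan(math.radians(fov_deg) / 2.0)
    return swath_m / n_pixels_across

# Hypothetical pushbroom geometry: 1240 across-track pixels, 34.5 degree FOV,
# flown at 100 m above ground -> roughly a 5 cm ground sampling distance.
print(round(ground_sampling_distance(100.0, 34.5, 1240), 3))
```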
Table 2. Key publications on novel sensors, concepts, or methods for calibration (C), integration (I), or data pre-processing (P) for UAV spectral sensors and data. RGB: red–green–blue.
Year | Description (Novelties) | Sensor Type | Sensor | Content | Reference
2008 | Calibration and application of spectroradiometrically characterized RGB cameras | Multispectral 2D imager | Canon EOS 350D; Sony DSC-F828 | C, P | [190]
2009 | Multi-camera multispectral 2D imager on UAV for vegetation monitoring | Multi-camera spectral 2D imager | MiniMCA | C, P | [130]
2012 | Small hyperspectral pushbroom UAV system for vegetation monitoring | Pushbroom | Micro-Hyperspec VNIR | C, P | [125]
2012 | Characterization and calibration of spectral 2D imager | Multi-camera spectral 2D imager | MiniMCA | C | [50]
2013 | Processing chain for sequential band spectral 2D imager for spectral and 3D data | Sequential band spectral 2D imager | Rikola FPI | P | [44]
2014 | Point spectrometer on UAV; wireless communication to ground spectrometer for irradiance measurements | Point spectrometer | STS-VIS | C, I | [22]
2014 | Self-assembled pushbroom system; orientation of image lines with a combination of GNSS/INS and aerial images | Pushbroom | Self-assembled | C, I, P | [40,126]
2014 | First pushbroom system on multi-rotor UAV for ultra-high resolution imaging spectroscopy; comprehensive description of calibration procedures | Pushbroom | Micro-Hyperspec VNIR | C, I | [38]
2014 | Uncertainty propagation of the hemispherical-directional reflectance observations in the radiometric processing chain | Sequential band spectral 2D imager | Rikola FPI | C, P | [162]
2015 | Hyperspectral 3D models; quality assurance information integration | Snapshot 2D imager | Cubert Firefleye | C, I, P | [43]
2015 | Multi-angular measurements with UAV | Point spectrometer | OceanOptics STS-VIS | C, I, P | [24]
2016 | Multi-angular measurements with UAV | Pushbroom | Self-built (HYMSY) | P | [187]
2016 | SWIR 2D imaging from UAV | Sequential band 2D imager | Tunable FPI SWIR | P | [65]
2016 | Implementation and calibration of multi-camera system on UAV | Multi-camera spectral 2D imager | Self-assembled | C, I | [58]
2017 | Measuring sun-induced fluorescence in the O2A band; comprehensive description of calibration procedures | Point spectrometer | OceanOptics USB4000 | C, I, P | [25]
2017 | Toolbox for pre-processing drone-borne hyperspectral data | Sequential spectral 2D imager | Rikola VNIR | C, P | [68]
2017 | BRDF measurements with UAV | Sequential band spectral 2D imager | Rikola FPI | P | [70]
2018 | Theoretical considerations to comprehend imaging spectroscopy with 2D imagers; explanation of differences between imaging and non-imaging data | 2D imagers in general | Cubert Firefleye | C, P | [42]
Table 3. Overview of the suitability (- not suited to ++ very well-suited) of georeferencing techniques for different types of hyperspectral UAV sensors. GCPs: ground control points, GNSS: global navigation satellite system; IMU: inertial measurement unit.
Sensor type | GCPs | On-board GNSS/IMU | SfM + GCPs and/or GNSS/IMU | Co-registration
Point spectroradiometer | - | ++ | ++ | -
Pushbroom | +/- | ++ | - | +
2D imager | + | + | ++ | +
Table 4. Overview of top-of-canopy reflectance generation procedures and their applicability to different atmospheric (e.g., cloudiness) and irradiance (e.g., different intensities due to diurnal sun angle change) conditions. Additionally, the applicability to point (P), pushbroom (PP), and 2D imagers (2D) is indicated. +: suitable; -: not suitable.
Radiometric Data Calibration Method | Stable Atmosphere, Stable Irradiance | Stable Atmosphere, Unstable Irradiance | Unstable Atmosphere, Unstable Irradiance | Applicable to
empirical line method | + | - | - | (P), PP, 2D
radiometric block adjustment | + | + | + | (PP *), 2D
stationary radiometric tracking | + | + | - | P, PP, 2D
on-board radiometric tracking | + | + | + | P, PP, 2D
radiative transfer modeling | + | - | - | P, PP, 2D
* for radiometric block adjustment, overlapping data is needed. Thus, it also works on multiple scenes (flight strips) of pushbroom systems.
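For the radiometric tracking options in the table, reflectance factors are typically derived by normalizing the at-sensor radiance with a downwelling irradiance recorded simultaneously on board the UAV or at a stationary station. A minimal sketch of this normalization follows, assuming a Lambertian reference and an ideal cosine response of the irradiance sensor; the numerical values are hypothetical.

```python
import numpy as np

def radiance_to_reflectance(radiance, irradiance):
    """Reflectance factor from at-sensor radiance and downwelling irradiance.

    radiance   : at-sensor radiance Ls per band [W m-2 sr-1 nm-1]
    irradiance : downwelling irradiance E per band [W m-2 nm-1], measured
                 on board or at a stationary tracking station at the same time
    Assumes a Lambertian surface, so R = pi * Ls / E.
    """
    return np.pi * np.asarray(radiance) / np.asarray(irradiance)

# Hypothetical single-band example: Ls = 0.12 W m-2 sr-1 nm-1 and
# E = 1.30 W m-2 nm-1 give a reflectance factor of about 0.29.
print(radiance_to_reflectance(0.12, 1.30))
```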
Table 5. Numeric (n) or qualitative (q), mandatory (m) or advised (a) auxiliary and metadata for spectral data processing, grouped by pixel, image, and scene level. Although the direct and diffuse illumination ratio is important, it is classed as advised, since it is not easy to measure.
Pixel | signal-to-noise ratio (n, m); radiometric resolution (n, m); viewing geometry (n, m)
Image | capturing position (n, m); illumination conditions (q, m); direct and diffuse illumination ratio (n, a); capturing time (n, m)
Scene | sensor description, including version (q, m); band configuration: band center and FWHM (n, m); geometric processing procedures and accuracies, including software version and parameters (q, m); top-of-canopy reflectance calculation method (q, m); reflectance uncertainty (n, a); environmental conditions during measurement (q, m)
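As a hedged illustration of how such auxiliary and metadata could be tracked alongside the spectral data, the sketch below writes the scene- and image-level items from the table to a simple JSON sidecar file. The field names, values, and file layout are illustrative assumptions only and do not represent any established metadata standard.

```python
import json

# Illustrative metadata record following the pixel/image/scene grouping above.
scene_metadata = {
    # Scene level
    "sensor_description": "sequential band 2D imager, firmware 2.1",            # (q, m)
    "band_configuration": [                                                      # (n, m)
        {"center_nm": 520.0, "fwhm_nm": 22.0},
        {"center_nm": 598.8, "fwhm_nm": 24.0},
        {"center_nm": 763.7, "fwhm_nm": 32.0},
    ],
    "geometric_processing": "SfM bundle adjustment, software X v1.4, RMSE 0.03 m",  # (q, m)
    "toc_reflectance_method": "empirical line method, two reference panels",    # (q, m)
    "reflectance_uncertainty": 0.02,                                             # (n, a)
    "environmental_conditions": "clear sky, light wind",                         # (q, m)
    # Image level, one entry per exposure
    "images": [
        {
            "capturing_time": "2018-06-30T10:15:04Z",                            # (n, m)
            "capturing_position_lon_lat_alt": [7.5964, 47.5596, 120.0],          # (n, m)
            "illumination_conditions": "stable, direct sun",                     # (q, m)
            "diffuse_to_direct_ratio": None,                                     # (n, a), if measured
        }
    ],
    # Pixel level: usually stored as separate raster layers rather than JSON
    "pixel_layers": ["signal_to_noise_ratio", "radiometric_resolution",
                     "viewing_geometry"],
}

with open("scene_metadata.json", "w") as f:
    json.dump(scene_metadata, f, indent=2)
```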
