
Table of Contents

Drones, Volume 4, Issue 2 (June 2020) – 19 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: Knowledge of temperature variation across animal habitats may help to predict demographic effects [...]
Open Access Article
Vegetation Extraction Using Visible-Bands from Openly Licensed Unmanned Aerial Vehicle Imagery
Drones 2020, 4(2), 27; https://doi.org/10.3390/drones4020027 - 26 Jun 2020
Viewed by 1230
Abstract
Red–green–blue (RGB) cameras attached to commercial unmanned aerial vehicles (UAVs) can support small-scale remote-observation campaigns by mapping an area of interest to within a few centimeters' accuracy. Vegetated areas need to be identified either for masking purposes (e.g., to exclude vegetated areas from the production of a digital elevation model (DEM)) or for monitoring vegetation anomalies, especially in precision agriculture applications. However, while the detection of vegetated areas is of great importance for several UAV remote sensing applications, this type of processing can be quite challenging. Healthy vegetation is usually extracted in the near-infrared part of the spectrum (approximately 760–900 nm), which is not captured by visible (RGB) cameras. In this study, we explore several visible-band (RGB) vegetation indices in different environments, using various UAV platforms and cameras, to validate their performance. For this purpose, openly licensed UAV imagery was downloaded "as is" and analyzed. The green leaf index (GLI) was found to provide the best results across all case studies.
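The green leaf index named in this abstract has a standard visible-band definition, GLI = (2G − R − B) / (2G + R + B). A minimal sketch in Python; the pixel values below are illustrative, not taken from the study:

```python
# Green leaf index (GLI) for a single RGB pixel, as used in visible-band
# vegetation mapping: GLI = (2G - R - B) / (2G + R + B).
# Values above zero indicate green vegetation; the sample pixels are
# illustrative only.

def green_leaf_index(r: float, g: float, b: float) -> float:
    """Return GLI in [-1, 1] for one pixel (any consistent radiometric scale)."""
    denom = 2 * g + r + b
    if denom == 0:
        return 0.0
    return (2 * g - r - b) / denom

# Grass-like pixel: strongly green-dominated.
veg = green_leaf_index(60, 140, 50)
# Bare-soil pixel: red-dominated.
soil = green_leaf_index(120, 100, 80)
```

Applied per pixel over an orthomosaic, thresholding GLI yields a simple visible-band vegetation mask.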
(This article belongs to the collection Feature Papers of Drones)
Show Figures

Figure 1

Open Access Article
Low-Altitude Terrain-Following Flight Planning for Multirotors
Drones 2020, 4(2), 26; https://doi.org/10.3390/drones4020026 - 25 Jun 2020
Viewed by 456
Abstract
Surveying with unmanned aerial vehicles flying close to the terrain is crucial for capturing details that are not visible when flying at higher altitudes. This type of mission applies to several scenarios, such as search and rescue, precision agriculture, and environmental monitoring, to name a few. We present a strategy for the generation of low-altitude terrain-following trajectories. The trajectory is generated taking into account the morphology of the area of interest, represented as a georeferenced Digital Surface Model (DSM), while ensuring a safe separation from any obstacle. The surface model of the scenario is created using UAV-based photogrammetry software, which processes the images acquired during a preliminary high-altitude mission. The solution was developed, tested, and verified both in simulation and in real scenarios with a multirotor equipped with low-cost sensing. The experimental results demonstrate the validity of generating trajectories at altitudes lower than those in most works available in the literature. As a representative application result, the images acquired during the low-altitude mission were processed to obtain a high-resolution reconstruction of the area.
Show Figures

Figure 1

Open Access Article
Using Multispectral Drone Imagery for Spatially Explicit Modeling of Wave Attenuation through a Salt Marsh Meadow
Drones 2020, 4(2), 25; https://doi.org/10.3390/drones4020025 - 24 Jun 2020
Viewed by 335
Abstract
Offering remarkable biodiversity, coastal salt marshes also provide a wide variety of ecosystem services: cultural services (leisure, tourist amenities), supply services (crop production, pastoralism), and regulation services, including carbon sequestration and natural protection against coastal erosion and inundation. This coastal-protection ecosystem service is part of a renewed vision of coastal risk management, especially for marine flooding, with an emerging focus on "nature-based solutions." In this work, using remote-sensing methods, we propose a novel drone-based methodology for spatially modeling salt marsh hydrodynamic attenuation at very high spatial resolution (VHSR). This indirect modeling is based on in situ measurements of significant wave heights (Hm0), which constitute the ground truth, as well as on spectral and topographical predictors from VHSR multispectral drone imagery. Using simple and multiple linear regressions, we identify the contribution of the predictors, taken individually and jointly. The best individual drone-based predictor is the green waveband. When individual predictors are added to the red-green-blue (RGB) model, the highest gain is observed with the red edge waveband, followed by the near-infrared waveband and then the digital surface model. The best full combination is the RGB model enhanced by the red edge waveband and the normalized difference vegetation index (coefficient of determination (R2): 0.85; root mean square error (RMSE): 0.20%/m).
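The multiple-linear-regression workflow described above, with R2 and RMSE as quality measures, can be sketched as follows. All data here are synthetic stand-ins (e.g., for a green-band reflectance and a DSM elevation), not the study's measurements:

```python
import numpy as np

# Ordinary-least-squares fit of a response (e.g., significant wave height)
# against two hypothetical drone-derived predictors, reporting R^2 and
# RMSE as in the predictor comparison. All values are synthetic.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(0.0, 1.0, size=(n, 2))          # synthetic predictors
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 + rng.normal(0.0, 0.02, n)

A = np.column_stack([X, np.ones(n)])            # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # [b1, b2, intercept]
y_hat = A @ coef

ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

Comparing R2/RMSE across predictor subsets fitted this way is how individual and joint predictor contributions are ranked.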
Show Figures

Graphical abstract

Open Access Article
Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment
Drones 2020, 4(2), 24; https://doi.org/10.3390/drones4020024 - 22 Jun 2020
Viewed by 449
Abstract
Efficient and rapid data collection techniques are necessary to obtain transitory information in the aftermath of natural hazards; such information is useful not only for post-event management and planning but also for post-event structural damage assessment. Aerial imaging from unpiloted (gender-neutral, but also known as unmanned) aerial systems (UASs), or drones, permits highly detailed site characterization with minimal ground support, in particular in the aftermath of extreme events, to document the current conditions of the region of interest. However, aerial imaging produces a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds, and both types of dataset require effective and efficient processing workflows to identify the various damage states of structures. This manuscript introduces two deep learning models, based on 2D and 3D convolutional neural networks, that process orthomosaic images and point clouds for post-windstorm classification. In detail, 2D convolutional neural networks (2D CNNs) are developed via transfer learning from two well-known networks, AlexNet and VGGNet. In contrast, a 3D fully convolutional network (3DFCN) with skip connections was developed and trained on the available point cloud data. The datasets were created from data collected in the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The 2D CNN and 3DFCN models were compared quantitatively on the performance measures, and the 3DFCN proved more robust in detecting the various classes. This demonstrates the value of 3D datasets, and particularly of depth information, in distinguishing between instances that represent different damage states of structures.
(This article belongs to the collection Feature Papers of Drones)
Show Figures

Figure 1

Open Access Letter
Measuring Wind Speed Using the Internal Stabilization System of a Quadrotor Drone
Drones 2020, 4(2), 23; https://doi.org/10.3390/drones4020023 - 16 Jun 2020
Viewed by 433
Abstract
This article proposes a method of measuring wind speed using the data logged by the autopilot of a quadrotor drone. Theoretical equations from works on quadrotor control are used and supplemented to form the theoretical framework. Static thrust tests provide the parameters necessary for calculating the wind estimates. Flight tests were conducted at a test site with laminar wind conditions, with the quadrotor hovering next to a static 2D ultrasonic anemometer at wind speeds between 0 and 5 m/s. The horizontal wind estimates achieve very good results, with root mean square error (RMSE) values between 0.26 and 0.29 m/s for wind speed and between 4.1 and 4.9 for wind direction. The flexibility of this new method simplifies the process, decreases the cost, and opens new application areas for wind measurements.
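The RMSE reported above is the usual paired-comparison measure between the drone-derived estimates and the anemometer reference; a minimal sketch, with hypothetical hover data rather than the study's logs:

```python
import math

# Root mean square error between paired wind measurements, as used to
# compare quadrotor-derived wind speed with an ultrasonic anemometer
# reference. The sample values are hypothetical.

def rmse(estimates, reference):
    assert len(estimates) == len(reference)
    return math.sqrt(
        sum((e - ref) ** 2 for e, ref in zip(estimates, reference)) / len(estimates)
    )

drone = [1.2, 2.4, 3.1, 4.0, 4.8]   # drone-derived wind speeds (m/s)
anemo = [1.0, 2.6, 3.0, 4.3, 4.7]   # anemometer reference (m/s)
err = rmse(drone, anemo)
```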
Show Figures

Figure 1

Open Access Letter
5G-Enabled Security Scenarios for Unmanned Aircraft: Experimentation in Urban Environment
Drones 2020, 4(2), 22; https://doi.org/10.3390/drones4020022 - 12 Jun 2020
Viewed by 366
Abstract
The telecommunication industry has seen rapid growth in the last few decades. This trend has been fostered by the diffusion of wireless communication technologies. In the city of Matera, Italy (European capital of culture 2019), two applications of 5G for public security have been tested by using an aerial drone: the recognition of objects and people in a crowded city and the detection of radio-frequency jammers. This article describes the experiments and the results obtained.
Show Figures

Figure 1

Open Access Article
Comparison of Machine Learning Algorithms for Wildland-Urban Interface Fuelbreak Planning Integrating ALS and UAV-Borne LiDAR Data and Multispectral Images
Drones 2020, 4(2), 21; https://doi.org/10.3390/drones4020021 - 11 Jun 2020
Viewed by 474
Abstract
Controlling vegetation fuels around human settlements is a crucial strategy for reducing fire severity in forests, buildings, and infrastructure, as well as for protecting human lives. Each country has its own regulations in this respect, but all have in common that reducing the fuel load in turn reduces the intensity and severity of a fire. Unmanned Aerial Vehicle (UAV)-acquired data, combined with other passive and active remote sensing data, offer the best performance for planning Wildland-Urban Interface (WUI) fuelbreaks through machine learning algorithms. Nine remote sensing data sources (active and passive) and four supervised classification algorithms (Random Forest, Linear and Radial Support Vector Machines (SVML and SVMR), and Artificial Neural Networks (ANN)) were tested to classify five fuel-area types. We used very-high-density Light Detection and Ranging (LiDAR) data acquired by UAV (154 returns·m−2, with a 5-cm pixel ortho-mosaic), multispectral data from the Pleiades-1B and Sentinel-2 satellites, and low-density LiDAR data acquired by Airborne Laser Scanning (ALS) (0.5 returns·m−2, with a 25-cm pixel ortho-mosaic). Through the Variable Selection Using Random Forest (VSURF) procedure, a pre-selection of final variables was carried out to train the model. The four algorithms were compared, and the differences among them in overall accuracy (OA) on the training datasets were negligible. Although the highest training accuracy was obtained with SVML (OA = 94.46%) and the highest testing accuracy with ANN (OA = 91.91%), Random Forest was considered the most reliable algorithm, since it produced more consistent predictions, with smaller differences between training and testing performance. Using a combination of Sentinel-2 and the two LiDAR datasets (UAV and ALS), Random Forest obtained an OA of 90.66% on the training dataset and 91.80% on the testing dataset. The differences in accuracy between the data sources used are much greater than those between algorithms. LiDAR growth metrics calculated from point clouds on different dates, and multispectral information from different seasons of the year, are the most important variables in the classification. Our results support the essential role of UAVs in fuelbreak planning and management and, thus, in the prevention of forest fires.
Show Figures

Figure 1

Open Access Article
Evaluating the Efficacy and Optimal Deployment of Thermal Infrared and True-Colour Imaging When Using Drones for Monitoring Kangaroos
Drones 2020, 4(2), 20; https://doi.org/10.3390/drones4020020 - 27 May 2020
Viewed by 784
Abstract
Advances in drone technology have given rise to much interest in the use of drone-mounted thermal imagery in wildlife monitoring. This research tested the feasibility of monitoring large mammals in an urban environment and investigated the influence of drone flight parameters and environmental conditions on their successful detection using thermal infrared (TIR) and true-colour (RGB) imagery. We conducted 18 drone flights at different altitudes on the Sunshine Coast, Queensland, Australia. Eastern grey kangaroos (Macropus giganteus) were detected from TIR (n=39) and RGB orthomosaics (n=33) using manual image interpretation. Factors that predicted the detection of kangaroos from drone images were identified using unbiased recursive partitioning. Drone-mounted imagery achieved an overall 73.2% detection success rate using TIR imagery and 67.2% using RGB imagery when compared to on-ground counts of kangaroos. We showed that the successful detection of kangaroos using TIR images was influenced by vegetation type, whereas detection using RGB images was influenced by vegetation type, time of day that the drone was deployed, and weather conditions. Kangaroo detection was highest in grasslands, and kangaroos were not successfully detected in shrublands. Drone-mounted TIR and RGB imagery are effective at detecting large mammals in urban and peri-urban environments.
(This article belongs to the Special Issue She Maps)
Show Figures

Graphical abstract

Open Access Article
Estimating Tree Height and Volume Using Unmanned Aerial Vehicle Photography and SfM Technology, with Verification of Result Accuracy
Drones 2020, 4(2), 19; https://doi.org/10.3390/drones4020019 - 11 May 2020
Cited by 1 | Viewed by 844
Abstract
This study aimed to investigate the effects of differences in shooting and flight conditions for an unmanned aerial vehicle (UAV) on the processing method and the estimates derived from aerial images. Forest images were acquired under 80 different conditions, combining various aerial photography methods and flight conditions. We verified the errors in values measured by the UAV and the measurement accuracy with respect to tree height and volume. Our results showed that aerial images could be processed under all the studied flight conditions. However, although tree height and crown were decipherable in the resulting 3D model under 64 conditions, they were undecipherable under 16. The standard deviation (SD) in crown area values for each target tree was 0.08 to 0.68 m². UAV measurements of tree height tended to be lower than the actual values, and the root mean square error (RMSE) was high (5.2 to 7.1 m) across all 64 modeled conditions. The estimated volume was likewise lower than the actual volume, with RMSE values for each flight condition ranging from 0.31 to 0.4 m³. Therefore, irrespective of the flight conditions used for UAV measurements, accuracy was low with respect to the actual values.
Show Figures

Graphical abstract

Open Access Article
Sharkeye: Real-Time Autonomous Personal Shark Alerting via Aerial Surveillance
Drones 2020, 4(2), 18; https://doi.org/10.3390/drones4020018 - 04 May 2020
Viewed by 781
Abstract
While aerial shark spotting has been a standard practice for beach safety for decades, new technologies offer enhanced opportunities, ranging from drones/unmanned aerial vehicles (UAVs) that provide new viewing capabilities to new apps that provide beachgoers with up-to-date risk analysis before entering the water. This report describes the Sharkeye platform, a first-of-its-kind project to demonstrate personal shark alerting for beachgoers in the water and on land, leveraging innovative UAV image collection, cloud-hosted machine learning detection algorithms, and reporting via smart wearables. To execute the project, our team developed a novel detection algorithm trained via machine learning on aerial footage of real sharks and rays collected at local beaches, hosted and deployed the algorithm in the cloud, and integrated push alerts to beachgoers in the water via a shark app running on smartwatches. The project was successfully trialed in the field in Kiama, Australia, with over 350 detection events recorded and multiple smartwatches alerted simultaneously, both on land and in the water; the analysis was capable of detecting shark analogues, rays, and surfers in average beach conditions, all based on roughly one hour of training data in total. Additional demonstrations showed the potential of the system to enable lifeguard-swimmer communication and the ability to create a network on demand to support the platform. Our system was developed to provide swimmers and surfers with immediate information via smart apps, empowering lifeguards/lifesavers and beachgoers to prevent unwanted encounters with wildlife before they happen.
(This article belongs to the Special Issue Drone Technology for Wildlife and Human Management)
Show Figures

Graphical abstract

Open Access Article
Development of a Simplified Radiometric Calibration Framework for Water-Based and Rapid Deployment Unmanned Aerial System (UAS) Operations
Drones 2020, 4(2), 17; https://doi.org/10.3390/drones4020017 - 02 May 2020
Viewed by 533
Abstract
The current study sets out to develop an empirical line method (ELM) radiometric calibration framework for the reduction of atmospheric contributions in unmanned aerial systems (UAS) imagery and for the production of scaled remote sensing reflectance imagery. Using a MicaSense RedEdge camera flown on a custom-built octocopter, the research reported herein finds that atmospheric contributions have an important impact on UAS imagery. Data collected over the Lower Pearl River Estuary in Mississippi during five week-long missions covering a wide range of environmental conditions were used to develop and test an ELM radiometric calibration framework designed for the reduction of atmospheric contributions from UAS imagery in studies with limited site accessibility or data acquisition time constraints. The ELM radiometric calibration framework was developed specifically for water-based operations, and the efficacy of using generalized study area calibration equations averaged across variable illumination and atmospheric conditions was assessed. The framework was effective in reducing atmospheric and other external contributions in UAS imagery. Unique to the proposed radiometric calibration framework is the radiance-to-reflectance conversion conducted externally from the calibration equations, which allows for the normalization of illumination independent from the time of UAS image acquisition and from the time of calibration equation development. While image-by-image calibrations are still preferred for high accuracy applications, this paper provides an ELM radiometric calibration framework that can be used as a time-effective calibration technique to reduce errors in UAS imagery in situations with limited site accessibility or data acquisition constraints.
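The empirical line method named in the abstract conventionally fits a per-band linear mapping from at-sensor signal to reflectance using reference targets of known reflectance; a two-panel sketch, with hypothetical panel values not drawn from the study:

```python
# Empirical line method (ELM): per-band linear fit mapping at-sensor
# digital numbers (DN) to surface reflectance, solved from two reference
# panels of known reflectance. Panel DNs and reflectances are hypothetical.

def elm_gain_offset(dn_dark, refl_dark, dn_bright, refl_bright):
    """Solve reflectance = gain * DN + offset from two calibration panels."""
    gain = (refl_bright - refl_dark) / (dn_bright - dn_dark)
    offset = refl_dark - gain * dn_dark
    return gain, offset

gain, offset = elm_gain_offset(dn_dark=2000, refl_dark=0.03,
                               dn_bright=30000, refl_bright=0.51)

def to_reflectance(dn):
    return gain * dn + offset

water_pixel = to_reflectance(4500)   # convert an arbitrary scene DN
```

The fitted line reproduces the panel reflectances exactly and scales every other pixel in the same band accordingly.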
(This article belongs to the Special Issue Unmanned Aerial Systems for Geosciences)
Show Figures

Figure 1

Open Access Article
Reliable Long-Range Multi-Link Communication for Unmanned Search and Rescue Aircraft Systems in Beyond Visual Line of Sight Operation
Drones 2020, 4(2), 16; https://doi.org/10.3390/drones4020016 - 01 May 2020
Viewed by 567
Abstract
With the increasing availability of unmanned aircraft systems, their use for search and rescue is close at hand. Especially in the maritime context, aerial support can yield significant benefits. This article proposes and evaluates the concept of combining multiple cellular networks for highly reliable communication with these aircraft systems. The proposed approach is experimentally validated in several unprecedented large-scale experiments in the maritime context. We find that, in this scenario, conventional single-link methods do not provide reliable connectivity to the aircraft, with overall availabilities varying significantly between 68% and 97%. The proposed approach, however, overcomes the limitations of single-link connectivity, providing availability of up to 99.8% in the analyzed scenarios. The approach and the experimental data presented in this work therefore constitute a solid contribution to search and rescue drone communications. All results and flight recording datasets are published along with this article to enable future related work and studies, as well as external reproduction and validation of the underlying results and findings.
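The benefit of combining links can be illustrated with elementary probability, under the simplifying assumption of independent link outages (real cellular outages can be correlated): the multi-link connection is unavailable only when every link is down at once.

```python
# Combined availability of parallel, independently failing links.
# The single-link availabilities below are illustrative values from the
# 68-97% range reported in the abstract; independence is an assumption.

def combined_availability(availabilities):
    """1 minus the product of the per-link outage probabilities."""
    outage = 1.0
    for a in availabilities:
        outage *= (1.0 - a)
    return 1.0 - outage

single_links = [0.68, 0.90, 0.97]
combined = combined_availability(single_links)
```

Even with these illustrative numbers, the combined figure exceeds the best single link, which is the qualitative effect the article quantifies at up to 99.8%.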
Show Figures

Figure 1

Open Access Communication
Drones as an Integral Part of Remote Sensing Technologies to Help Missing People
Drones 2020, 4(2), 15; https://doi.org/10.3390/drones4020015 - 27 Apr 2020
Cited by 1 | Viewed by 883
Abstract
Owing to its versatility, the drone can be applied in various areas, for different uses, and as practical support for human activities. This paper focuses on the situation in Italy and on how the authorities use drones for the search and rescue of missing persons, a problem that has plagued Italy for a decade with a large number of incidents annually. Knowledge of the current legislation, the integration of the drone with other instruments, specific pilot training, and experiential contributions are all essential elements that can provide exceptional assistance in search and rescue operations. However, to guarantee the maximum effectiveness of the rescue effort, the authorities should seriously consider including teams with proven expertise in operating drones and count on their valuable contribution. Besides drones' capacity to search large areas (thereby reducing the use of human resources and possibly limiting intervention times) and to operate in difficult terrain and/or in conditions dangerous for rescue teams, remote sensing tools (such as ground-penetrating radar, GPR) and other disciplines (such as forensic archaeology and, more generally, the forensic geosciences) can be brought in to carry out search and rescue missions for missing persons.
Show Figures

Figure 1

Open Access Feature Paper Article
An Annular Wing VTOL UAV: Flight Dynamics and Control
Drones 2020, 4(2), 14; https://doi.org/10.3390/drones4020014 - 26 Apr 2020
Cited by 1 | Viewed by 890
Abstract
A vertical takeoff and landing (VTOL) unmanned aerial vehicle is presented that features a quadrotor design for propulsion and attitude stabilization and an annular wing that provides lift in forward flight. The annular wing enhances human safety by enshrouding the propeller blades. Both the annular wing and the propulsion units are fully characterized in forward flight via wind tunnel experiments. An autonomous control system is synthesized that is based on model inversion and accounts for the aerodynamics of the wing. It also accounts for the dominant aerodynamics of the propellers in forward flight, specifically the thrust and rotor torques under oblique flow conditions. The attitude controller employed is tilt-prioritized, as the aerodynamics are invariant to the twist angle of the vehicle. Outdoor experiments were performed, resulting in accurate tracking of the reference position trajectories at high speeds.
Show Figures

Figure 1

Open Access Article
Accuracy of 3D Landscape Reconstruction without Ground Control Points Using Different UAS Platforms
Drones 2020, 4(2), 13; https://doi.org/10.3390/drones4020013 - 24 Apr 2020
Cited by 1 | Viewed by 1045
Abstract
The rapid proliferation of low-cost consumer-grade to enterprise-level unmanned aerial systems (UASs) has resulted in the exponential growth of their use in many applications. Structure from motion with multiview stereo (SfM-MVS) photogrammetry is now the baseline for the development of orthoimages and 3D surfaces (e.g., digital elevation models). The horizontal and vertical positional accuracies (x, y, and z) of these products generally rely heavily on the use of ground control points (GCPs). However, for many applications, the use of GCPs is not possible. Here we tested 14 UASs, ranging from consumer-grade to enterprise-grade vertical takeoff and landing (VTOL) platforms, to assess the positional and within-model accuracy of SfM-MVS reconstructions of low-relief landscapes without GCPs. We found that high positional accuracy is not necessarily related to platform cost or grade; rather, the most important factor is the use of post-processed kinematic (PPK) or real-time kinematic (RTK) solutions for geotagging the photographs. SfM-MVS products generated from UASs with onboard geotagging, regardless of grade, exhibit greater positional accuracies and lower within-model errors. We conclude that where repeatability and adherence to a high level of accuracy are needed, only RTK and PPK systems should be used without GCPs.
(This article belongs to the Special Issue She Maps)
Show Figures

Graphical abstract

Open Access Article
Thermal Imaging of Beach-Nesting Bird Habitat with Unmanned Aerial Vehicles: Considerations for Reducing Disturbance and Enhanced Image Accuracy
Drones 2020, 4(2), 12; https://doi.org/10.3390/drones4020012 - 24 Apr 2020
Viewed by 1225
Abstract
Knowledge of temperature variation within and across beach-nesting bird habitat, and of how such variation may affect the nesting success and survival of these species, is currently lacking. These data are moreover needed to refine predictions of population changes due to climate change, identify important breeding habitat, and guide habitat restoration efforts. Thermal imagery collected with unmanned aerial vehicles (UAVs) provides a potential approach to fill current knowledge gaps and accomplish these goals. Our research outlines a novel methodology for collecting and implementing active thermal ground control points (GCPs) and assesses the accuracy of the resulting imagery, using an off-the-shelf commercial fixed-wing UAV that allows for the reconstruction of thermal landscapes at high spatial, temporal, and radiometric resolutions. Additionally, we observed and documented the behavioral responses of beach-nesting birds to UAV flights, along with the modifications made to flight plans or to the physical appearance of the UAV to minimize disturbance. We found strong evidence that flying on cloudless days and using sky-blue camouflage greatly reduced disturbance to nesting birds. Incorporating the novel active thermal GCPs into the processing workflow increased image spatial accuracy by an average of 12 m horizontally (the mean root mean square error of checkpoints in imagery with and without GCPs was 0.59 m and 23.75 m, respectively). The final thermal indices generated had a ground sampling distance of 25.10 cm and a thermal accuracy of less than 1 °C. This practical approach to collecting highly accurate thermal data for beach-nesting bird habitat while avoiding disturbance is a crucial step towards the continued monitoring and modeling of beach-nesting birds and their habitat.
(This article belongs to the Special Issue She Maps)
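The spatial accuracy figures above are mean root mean square errors over surveyed checkpoints. A minimal sketch of that computation is shown below; the function name and the toy coordinates are illustrative, not taken from the paper:

```python
import math

def rmse_3d(measured, reference):
    """Root mean square error between image-derived checkpoint
    coordinates and their surveyed positions.
    Both arguments are lists of (x, y, z) tuples in metres."""
    sq_dists = [
        sum((m - r) ** 2 for m, r in zip(pt_m, pt_r))
        for pt_m, pt_r in zip(measured, reference)
    ]
    return math.sqrt(sum(sq_dists) / len(sq_dists))
```

Comparing the RMSE of the same checkpoints in models processed with and without GCPs quantifies the accuracy gain, as the abstract does.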
Open Access Article
Monitoring Selective Logging in a Pine-Dominated Forest in Central Germany with Repeated Drone Flights Utilizing A Low Cost RTK Quadcopter
Drones 2020, 4(2), 11; https://doi.org/10.3390/drones4020011 - 09 Apr 2020
Viewed by 750
Abstract
There is no doubt that unmanned aerial systems (UAS) will play an increasing role in Earth observation in the near future. The field of application is very broad and includes environmental monitoring, security, humanitarian aid, and engineering. Drones with camera systems in particular are already widely used. The capability to compute ultra-high-resolution orthomosaics and three-dimensional (3D) point clouds from UAS imagery generates wide interest in such systems, not only in the science community but also in industry and agencies. Forestry sciences especially benefit from ultra-high-resolution structural and spectral information, as regular tree-level monitoring becomes feasible. There is a great need for this kind of information: due to the spring and summer droughts in Europe in 2018/2019, for example, large numbers of individual trees were damaged or even died. This study focuses on selective logging at the level of individual trees using repeated drone flights. Using the new generation of UAS, which allows for sub-decimeter positioning accuracies, a change detection approach based on bi-temporal UAS acquisitions was implemented. In comparison to conventional UAS, the effort of implementing repeated drone flights in the field was low, because no ground control points needed to be surveyed. As shown in this study, the geometrical offset between the two collected datasets was below 10 cm across the site, which enabled a direct comparison of both datasets without post-processing (e.g., image matching). For the detection of logged trees, we utilized the spectral and height differences between the two acquisitions. For their delineation, an object-based approach was employed, which proved to be highly accurate (precision = 97.5%; recall = 91.6%). Given the ease of use of such new-generation, off-the-shelf consumer drones, their decreasing purchase costs, the quality of available workflows for data processing, and the convincing results presented here, UAS-based data can and should complement conventional forest inventory practices. Full article
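The core of a bi-temporal change detection such as the one described above is a per-cell comparison of height models from the two flights. The sketch below is a deliberate simplification (the study used an object-based approach combining spectral and height differences; the function name and the 5 m threshold are assumptions for illustration):

```python
def logged_tree_candidates(chm_before, chm_after, min_height_drop=5.0):
    """Compare two gridded canopy height models (nested lists of
    heights in metres, same shape). Returns a boolean grid flagging
    cells whose canopy height dropped by more than min_height_drop
    between the two acquisitions -- candidate removed trees."""
    return [
        [(b - a) > min_height_drop for b, a in zip(row_b, row_a)]
        for row_b, row_a in zip(chm_before, chm_after)
    ]
```

Such a direct subtraction is only meaningful because the sub-decimeter RTK positioning keeps the two datasets co-registered without ground control points.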
Open Access Article
Individual Tree Crown Segmentation in Two-Layered Dense Mixed Forests from UAV LiDAR Data
Drones 2020, 4(2), 10; https://doi.org/10.3390/drones4020010 - 02 Apr 2020
Viewed by 595
Abstract
In forests with dense mixed canopies, laser scanning is often the only effective technique to acquire forest inventory attributes, as structure-from-motion optical methods struggle there. This study investigates the potential of laser scanner data collected with a low-cost unmanned aerial vehicle laser scanner (UAV-LS) for individual tree crown (ITC) delineation to derive forest biometric parameters over two-layered dense mixed forest stands in central Italy. A raster-based local maxima region-growing algorithm (itcLiDAR) and a point cloud-based algorithm (li2012) were applied to isolate individual tree crowns, compute height and crown area, and estimate the diameter at breast height (DBH) and the above-ground biomass (AGB) of individual trees. To maximize the detection rate, the ITC algorithm parameters were tuned over 1350 setting combinations, matching the segmented trees with field-measured trees. For each setting, the delineation accuracy was assessed by computing the detection rate and the omission and commission errors over three forest plots. Segmentation using itcLiDAR showed detection rates between 40% and 57%; it successfully segmented trees with DBH larger than 10 cm (detection rate ~78%) but failed to detect trees with smaller DBH (detection rate ~37%). The performance of li2012 was markedly lower, with a maximum detection rate of 27%. Errors and goodness-of-fit between field-surveyed and flight-derived biometric parameters (AGB and tree height) were species-dependent, with higher errors and lower r2 for the shorter species that constitute the lowermost layer of the forest. Overall, while the application of UAV-LS to delineate tree crowns and estimate biometric parameters is satisfactory, its accuracy is affected by the presence of a multilayered, multispecies canopy, which will require specific approaches and algorithms to deal with the added complexity. Full article
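The accuracy metrics used above follow directly from counts of matched, reference, and segmented crowns. A minimal sketch, assuming the standard definitions of detection rate and omission/commission error (the function name and the example counts are illustrative):

```python
def delineation_accuracy(n_matched, n_reference, n_segmented):
    """Accuracy metrics for an individual tree crown segmentation.
    n_matched   -- segmented crowns matched to a field-measured tree
    n_reference -- field-measured (reference) trees
    n_segmented -- crowns produced by the segmentation algorithm
    Returns (detection rate, omission error, commission error)."""
    detection_rate = n_matched / n_reference
    omission_error = (n_reference - n_matched) / n_reference
    commission_error = (n_segmented - n_matched) / n_segmented
    return detection_rate, omission_error, commission_error
```

Tuning the 1350 parameter combinations then amounts to keeping the setting that maximizes the detection rate under these definitions.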
Open Access Feature Paper Article
Coastal Mapping Using DJI Phantom 4 RTK in Post-Processing Kinematic Mode
Drones 2020, 4(2), 9; https://doi.org/10.3390/drones4020009 - 30 Mar 2020
Cited by 2 | Viewed by 832
Abstract
Topographic and geomorphological surveys of coastal areas usually require the aerial mapping of long, narrow sections of littoral. The georeferencing of photogrammetric models is generally based on the signalization and survey of Ground Control Points (GCPs), which are very time-consuming tasks. Direct georeferencing with high camera-location accuracy, enabled by on-board multi-frequency GNSS receivers, can limit the need for GCPs. Recently, DJI has made available the Phantom 4 Real-Time Kinematic (RTK) (DJI-P4RTK), which combines the versatility and ease of use of previous DJI Phantom models with the advantages of a multi-frequency on-board GNSS receiver. In this paper, we investigated the accuracy of both photogrammetric models and Digital Terrain Models (DTMs) generated in Agisoft Metashape from two different image datasets (nadiral and oblique) acquired by a DJI-P4RTK. Camera locations were computed by Post-Processing Kinematic (PPK) processing of the Receiver Independent Exchange Format (RINEX) file recorded by the aircraft during flight missions, using a Continuously Operating Reference Station (CORS) located 15 km from the site. The results highlighted that the oblique dataset produced very similar results with GCPs (3D RMSE = 0.025 m) and without them (3D RMSE = 0.028 m), while the nadiral dataset was more sensitive to the position and number of GCPs (3D RMSE from 0.034 to 0.075 m). Introducing a few oblique images into the nadiral dataset without any GCP improved the vertical accuracy of the model (Up RMSE from 0.052 to 0.025 m) and can represent a solution to speed up the image acquisition of nadiral datasets for PPK with the DJI-P4RTK and no GCPs. Moreover, the results of this research are compared to those obtained in RTK mode for the same datasets. The novelty of this research lies in combining a multitude of aspects regarding the DJI Phantom 4 RTK aircraft and the subsequent data-processing strategies for assessing the quality of photogrammetric models, DTMs, and cross-section profiles. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles in Geomatics)
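The abstract reports both a 3D RMSE and a separate Up (vertical) RMSE, which come from decomposing each checkpoint residual into East/North/Up components. A minimal sketch of that decomposition (the function name and the single-residual example are illustrative, not from the paper):

```python
import math

def rmse_components(residuals):
    """Split checkpoint residuals -- a list of (dE, dN, dUp) tuples in
    metres -- into (horizontal RMSE, vertical RMSE, 3D RMSE)."""
    n = len(residuals)
    mse_e = sum(e * e for e, _, _ in residuals) / n
    mse_n = sum(v * v for _, v, _ in residuals) / n
    mse_u = sum(u * u for _, _, u in residuals) / n
    horizontal = math.sqrt(mse_e + mse_n)
    vertical = math.sqrt(mse_u)
    total_3d = math.sqrt(mse_e + mse_n + mse_u)
    return horizontal, vertical, total_3d
```

Reporting the components separately makes it visible when, as here, adding oblique images improves the vertical accuracy without much change in the horizontal one.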