Special Issue "Trends in UAV Remote Sensing Applications"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 30 April 2020.

Special Issue Editors

Dr. Qinghua Guo
Guest Editor
State Key Laboratory of Vegetation and Environmental Change, Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
Interests: remote sensing; lidar applications; GIS; UAV; climate change and terrestrial ecosystems
Dr. Yanjun Su
Guest Editor
Institute of Botany, Chinese Academy of Sciences, Beijing 100093, China
Interests: lidar; vegetation mapping; UAV; forest structure

Special Issue Information

Dear Colleagues,

Unmanned aerial vehicle (UAV) technology bridges the gap among spaceborne, airborne, and ground-based remote sensing. Its light weight and low cost enable affordable observations at very high spatial and temporal resolutions. Moreover, the stability, flight duration, and payload capacity of UAVs have increased significantly in recent years with the development of flight-control and battery technology, which enables a wider variety of sensors (e.g., optical, lidar, and radar sensors) to be mounted on small UAVs. These multi-source UAV sensing data with high spatial and temporal resolutions are driving new developments in remote sensing applications, such as powerline inspection, forest mapping and management, terrain survey, geological disaster survey, biodiversity conservation, and hydrological modelling.

For this Special Issue, we seek submissions reviewing the trends of UAV remote sensing in, but not limited to, the fields of powerline inspection, forest mapping and management, archeology, terrain survey, geological disaster survey, biodiversity conservation, and hydrological modelling. Reviews on trends in the integration of UAV remote sensing hardware and the fusion of multi-source UAV remote sensing data, as well as novel and advanced research on UAV remote sensing applications, are also welcome.

Dr. Qinghua Guo
Dr. Yanjun Su
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • UAV 
  • Remote sensing
  • Trends
  • Applications

Published Papers (16 papers)

Research

Open Access Article
Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety
Remote Sens. 2020, 12(1), 182; https://doi.org/10.3390/rs12010182 - 03 Jan 2020
Abstract
Unmanned aerial vehicle (UAV) remote sensing and deep learning provide a practical approach to object detection. However, most current approaches for processing UAV remote-sensing data cannot carry out object detection in real time for emergencies, such as firefighting. This study proposes a new approach for integrating UAV remote sensing and deep learning for the real-time detection of ground objects. Excavators, which usually threaten pipeline safety, are selected as the target object. A widely used deep-learning algorithm, namely You Only Look Once V3, is first used to train the excavator detection model on a workstation, and the model is then deployed on an embedded board carried by a UAV. The recall rate of the trained excavator detection model is 99.4%, demonstrating very high accuracy. A UAV-based excavator detection system (UAV-ED) is then constructed for operational application. UAV-ED is composed of a UAV Control Module, a UAV Module, and a Warning Module. A UAV experiment with different scenarios was conducted to evaluate the performance of the UAV-ED. The whole process, from the UAV observing an excavator to the Warning Module (350 km away from the testing area) receiving the detection results, lasted only about 1.15 s. Thus, the UAV-ED system performs well and would benefit the management of pipeline safety.
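
As an aside on how a recall figure of this kind is typically computed, the sketch below (illustrative only, not the authors' code) matches predicted boxes to ground-truth boxes by intersection-over-union and reports the fraction of ground truths recovered; the 0.5 IoU threshold and the toy boxes are assumptions.

```python
# Hypothetical helper: detection recall from IoU-matched boxes.
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def recall(gt_boxes, pred_boxes, iou_thr=0.5):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    matched = sum(
        any(iou(gt, pred) >= iou_thr for pred in pred_boxes) for gt in gt_boxes
    )
    return matched / len(gt_boxes) if gt_boxes else 0.0

# Toy example: one ground-truth excavator, one overlapping detection.
print(recall([[10, 10, 60, 60]], [[12, 8, 58, 62]]))  # -> 1.0
```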

Open Access Article
The Impact of Spatial Resolution on the Classification of Vegetation Types in Highly Fragmented Planting Areas Based on Unmanned Aerial Vehicle Hyperspectral Images
Remote Sens. 2020, 12(1), 146; https://doi.org/10.3390/rs12010146 - 01 Jan 2020
Abstract
Fine classification of vegetation types has always been a focus and a difficulty in remote sensing applications. Unmanned Aerial Vehicle (UAV) sensors and platforms have become important data sources in various application fields due to their high spatial resolution and flexibility. In particular, UAV hyperspectral images can play a significant role in the fine classification of vegetation types. However, it is not clear how ultrahigh-resolution UAV hyperspectral images perform in the fine classification of vegetation types in highly fragmented planting areas, or how varying the spatial resolution of UAV images affects classification accuracy. Based on UAV hyperspectral images obtained from a commercial hyperspectral imaging sensor (S185) onboard a UAV platform, this paper examines the impact of spatial resolution on the classification of vegetation types in highly fragmented planting areas in southern China by aggregating the 0.025 m hyperspectral image to relatively coarse spatial resolutions (0.05, 0.1, 0.25, 0.5, 1, 2.5 m). The object-based image analysis (OBIA) method was used, and the effects of several segmentation scale parameters and different numbers of features were discussed. Classification accuracies from 84.3% to 91.3% were obtained for the multi-scale images. The results show that, as spatial resolution decreases, the classification accuracies fluctuate only slightly at first and then decrease gradually beyond the 0.5 m spatial resolution. The best classification accuracy does not occur in the original image but at an intermediate level of resolution. The study also shows that the appropriate feature parameters vary across scales. With decreasing spatial resolution, the importance of vegetation index features increases while that of textural features shows the opposite trend; the appropriate segmentation scale gradually decreases, and the appropriate number of features is 30 to 40. Therefore, it is of vital importance to select appropriate feature parameters for images at different scales so as to ensure classification accuracy.
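
For readers wondering how a 0.025 m image is degraded to the coarser test resolutions, the following minimal sketch (assuming a single-band array and integer aggregation factors, not the authors' processing chain) performs the usual block averaging.

```python
import numpy as np

def aggregate(image, factor):
    """Average non-overlapping factor x factor blocks of a 2-D array."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

fine = np.random.rand(400, 400)                    # stand-in for a 0.025 m band
coarse_005 = aggregate(fine, 2)                    # ~0.05 m
coarse_025 = aggregate(fine, 10)                   # ~0.25 m
print(coarse_005.shape, coarse_025.shape)
```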

Open Access Article
Evaluation of UAV LiDAR for Mapping Coastal Environments
Remote Sens. 2019, 11(24), 2893; https://doi.org/10.3390/rs11242893 - 04 Dec 2019
Abstract
Unmanned Aerial Vehicle (UAV)-based remote sensing techniques have demonstrated great potential for monitoring rapid shoreline changes. With image-based approaches utilizing Structure from Motion (SfM), high-resolution Digital Surface Models (DSMs) and orthophotos can be generated efficiently from UAV imagery. However, image-based mapping yields relatively poor results in low-textured areas compared to LiDAR. This study demonstrates the applicability of UAV LiDAR for mapping coastal environments. A custom-built UAV-based mobile mapping system is used to simultaneously collect LiDAR and imagery data. The quality of the LiDAR and image-based point clouds is investigated and compared over different geomorphic environments in terms of point density, relative and absolute accuracy, and area coverage. The results suggest that both UAV LiDAR and image-based techniques provide high-resolution and high-quality topographic data, and the point clouds generated by the two techniques agree within a 5 to 10 cm range. UAV LiDAR has a clear advantage in terms of large and uniform ground coverage over different geomorphic environments, higher point density, and the ability to penetrate vegetation and capture points below the canopy. Furthermore, UAV LiDAR-based data acquisitions are assessed for their applicability in monitoring shoreline changes over two actively eroding sandy beaches along southern Lake Michigan, Dune Acres and Beverly Shores, through repeated field surveys. The results indicate considerable volume loss and ridge point retreat over an extended period of one year (May 2018 to May 2019) as well as over a short storm-induced period of one month (November 2018 to December 2018). The foredune ridge recession ranges from 0 m to 9 m. The average volume loss at Dune Acres is 18.2 cubic meters per meter and 12.2 cubic meters per meter within the one-year period and the storm-induced period, respectively, highlighting the importance of episodic events in coastline changes. The average volume loss at Beverly Shores is 2.8 cubic meters per meter and 2.6 cubic meters per meter within the survey period and the storm-induced period, respectively.
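
A minimal sketch of the kind of nearest-neighbour comparison that underlies such a 5 to 10 cm compatibility figure is given below; the synthetic point clouds and the 5 cm noise level are assumptions, not the survey data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
lidar = rng.uniform(0, 100, size=(5000, 3))          # stand-in LiDAR cloud (m)
sfm = lidar + rng.normal(0, 0.05, size=lidar.shape)  # stand-in SfM cloud, ~5 cm noise

dist, _ = cKDTree(lidar).query(sfm)                  # NN distance for each SfM point
print(f"median offset: {np.median(dist):.3f} m, 95th pct: {np.percentile(dist, 95):.3f} m")
```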

Open Access Article
Orientation- and Scale-Invariant Multi-Vehicle Detection and Tracking from Unmanned Aerial Videos
Remote Sens. 2019, 11(18), 2155; https://doi.org/10.3390/rs11182155 - 16 Sep 2019
Abstract
Along with the advancement of lightweight sensing and processing technologies, unmanned aerial vehicles (UAVs) have recently become popular platforms for intelligent traffic monitoring and control. UAV-mounted cameras can capture traffic-flow videos from various perspectives, providing comprehensive insight into road conditions. To analyze the traffic flow from remotely captured videos, a reliable and accurate vehicle detection-and-tracking approach is required. In this paper, we propose a deep-learning framework for vehicle detection and tracking from UAV videos for monitoring traffic flow in complex road structures. The approach is designed to be invariant to significant orientation and scale variations in the videos. The detection procedure is performed by fine-tuning a state-of-the-art object detector, You Only Look Once (YOLOv3), using several custom-labeled traffic datasets. Vehicle tracking is conducted following a tracking-by-detection paradigm, in which deep appearance features are used for vehicle re-identification and Kalman filtering is used for motion estimation. The proposed methodology is tested on a variety of real videos collected by UAVs under various conditions, e.g., in late afternoon with long vehicle shadows, at dawn with vehicle lights on, over roundabouts and interchange roads where vehicle directions change considerably, and from various viewpoints where the vehicles' appearance undergoes substantial perspective distortion. The proposed tracking-by-detection approach runs efficiently at 11 frames per second on color videos of 2720p resolution. Experiments demonstrated that high detection accuracy could be achieved, with an average F1-score of 92.1%. In addition, the tracking technique performs accurately, with an average multiple-object tracking accuracy (MOTA) of 81.3%. The proposed approach also addresses the shortcomings of the state of the art in multi-object tracking regarding frequent identity switching, resulting in only one identity switch per 305 tracked vehicles on average.
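
The sketch below illustrates the data-association step of a generic tracking-by-detection pipeline, matching track and detection appearance embeddings with the Hungarian algorithm; the toy features and the 0.4 gating threshold are assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cost=0.4):
    """Return (track_idx, det_idx) pairs whose cosine distance is below max_cost."""
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                       # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]

rng = np.random.default_rng(1)
tracks = rng.normal(size=(3, 128))             # appearance embeddings of 3 tracks
dets = tracks[[2, 0]] + rng.normal(0, 0.05, size=(2, 128))  # 2 new detections
print(associate(tracks, dets))                 # e.g. [(0, 1), (2, 0)]
```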

Open Access Article
Drone-Borne Hyperspectral and Magnetic Data Integration: Otanmäki Fe-Ti-V Deposit in Finland
Remote Sens. 2019, 11(18), 2084; https://doi.org/10.3390/rs11182084 - 05 Sep 2019
Abstract
The technical evolution of unmanned aerial systems (UAS) for mineral exploration is advancing rapidly. Recent sensor developments and improved UAS performance open new fields for research and applications in geological and geophysical exploration, among others. In this study, we introduce an integrated acquisition and processing strategy for drone-borne multi-sensor surveys combining optical remote sensing and magnetic data. We deploy both fixed-wing and multicopter UAS to characterize an outcrop of the Otanmäki Fe-Ti-V deposit in central Finland. The lithology consists mainly of gabbro intrusions hosting ore bodies of magnetite-ilmenite. Large areas of the outcrop are covered by lichen and low vegetation. We use two drone-borne multi- and hyperspectral cameras operating in the visible to near-infrared parts of the electromagnetic spectrum to identify dominant geological features and the extents of ore bodies via iron-indicating proxy minerals. We apply band ratios and unsupervised and supervised image classifications to the spectral data, from which we map surficial iron-bearing zones. We use two setups with three-axis fluxgate magnetometers, deployed by both a fixed-wing and a multicopter UAS, to measure the magnetic field at various flight altitudes (15 m, 40 m, 65 m). The total magnetic intensity (TMI) computed from the individual components is used for further interpretation of ore distribution. We compare the results with traditional ground-based magnetic survey data to evaluate the UAS-based results. The measured anomalies and spectral data are validated and assigned to the outcropping geology and ore mineralization by surface spectroscopy, portable X-ray fluorescence (pXRF), magnetic susceptibility measurements, and traditional geologic mapping. Locations of mineral zones and magnetic anomalies correlate with the established geologic map. The integrated survey strategy allowed straightforward mapping of ore occurrences. We highlight the efficiency, spatial resolution, and reliability of UAS surveys. The magnetic UAS survey was about 20 times faster to acquire than the ground survey at a comparable resolution. The proposed workflow can facilitate surveying, particularly in areas with complicated terrain and limited accessibility, while also highlighting the remaining challenges in UAS mapping.
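
As an illustration of the band-ratio step, the sketch below computes a simple red/blue ratio (a common iron-oxide proxy in visible to near-infrared data) and thresholds it; the band choice and the 90th-percentile threshold are assumptions rather than the authors' exact ratios.

```python
import numpy as np

def band_ratio(red, blue, eps=1e-6):
    """Per-pixel red/blue ratio; higher values suggest iron-oxide staining."""
    return red / (blue + eps)

rng = np.random.default_rng(2)
red = rng.uniform(0.1, 0.6, size=(200, 200))     # stand-in reflectance bands
blue = rng.uniform(0.1, 0.6, size=(200, 200))
ratio = band_ratio(red, blue)
iron_mask = ratio > np.percentile(ratio, 90)     # hypothetical threshold
print(iron_mask.mean())                          # fraction of flagged pixels
```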

Open Access Article
A Tree Species Mapping Method from UAV Images over Urban Area Using Similarity in Tree-Crown Object Histograms
Remote Sens. 2019, 11(17), 1982; https://doi.org/10.3390/rs11171982 - 22 Aug 2019
Abstract
Timely and accurate information about the spatial distribution of tree species in urban areas provides crucial data for sustainable urban development, management, and planning. Very high spatial resolution data collected by sensors onboard Unmanned Aerial Vehicle (UAV) systems provide rich data sources for mapping tree species. This paper proposes a method for tree species mapping from UAV images over urban areas using similarity in tree-crown object histograms and a simple thresholding method. Tree-crown objects are first extracted and used as processing units in subsequent steps. Tree-crown object histograms of multiple features, i.e., spectral and height-related features, are generated to quantify within-object variability. A specific tree species is extracted by comparing the histogram similarity between a target tree-crown object and reference objects. The proposed method is evaluated in mapping four different tree species using UAV multispectral ortho-images and derived Digital Surface Model (DSM) data collected in the Shanghai urban area, and compared with an existing method. The results demonstrate that the proposed method outperforms the comparative method for all four tree species, with improvements of 0.61–5.81% in overall accuracy. The proposed method provides a simple and effective way of mapping tree species over urban areas.
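
A minimal sketch of per-object histogram comparison is shown below; the synthetic reflectance values, bin count, and acceptance threshold are assumptions, not the paper's settings.

```python
import numpy as np

def object_histogram(values, bins=32, value_range=(0.0, 1.0)):
    """Normalized histogram of one feature over the pixels of a crown object."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """1.0 = identical distributions, 0.0 = disjoint."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(3)
reference = object_histogram(rng.normal(0.55, 0.05, 800).clip(0, 1))  # target species
candidate = object_histogram(rng.normal(0.53, 0.06, 650).clip(0, 1))
sim = histogram_intersection(reference, candidate)
print(f"similarity = {sim:.2f}")   # compare against a chosen threshold, e.g. 0.7
```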

Open Access Article
Emergent Challenges for Science sUAS Data Management: Fairness through Community Engagement and Best Practices Development
Remote Sens. 2019, 11(15), 1797; https://doi.org/10.3390/rs11151797 - 31 Jul 2019
Cited by 2
Abstract
The use of small Unmanned Aircraft Systems (sUAS) as platforms for data capture has rapidly increased in recent years. However, while there has been significant investment in improving the aircraft, sensors, operations, and legislative infrastructure, little attention has been paid to supporting the management of the complex data-capture pipeline that sUAS involve. This paper reports on a four-year, community-based investigation into the tools, data practices, and challenges that currently exist, particularly for researchers using sUAS as data-capture platforms. The key results of this effort are: (1) sUAS-captured data—a set that is rapidly growing to include data from a wide range of physical and environmental sciences, engineering disciplines, and many civil and commercial use cases—share many traits with traditional remote sensing data while also exhibiting, across this spectrum of disciplines and use cases, novel characteristics that require novel data-support infrastructure; and (2) given this characterization of sUAS data and its potential value in the identified wide variety of use cases, we outline eight challenges that need to be addressed in order for the full value of sUAS-captured data to be realized. We conclude that significant value would be gained, and costs saved, across both commercial and academic sectors if the global sUAS user and data management communities were to address these challenges in the immediate to near future, so as to extract the maximal value of sUAS-captured data for the lowest long-term effort and monetary cost.

Open Access Article
A Harmonious Satellite-Unmanned Aerial Vehicle-Ground Measurement Inversion Method for Monitoring Salinity in Coastal Saline Soil
Remote Sens. 2019, 11(14), 1700; https://doi.org/10.3390/rs11141700 - 18 Jul 2019
Cited by 1
Abstract
Soil salinization adversely impacts crop growth and production, especially in coastal areas, which experience serious soil salinization. Therefore, rapid and accurate monitoring of the salinity and distribution of coastal saline soil is crucial. Representative areas of the Yellow River Delta (YRD)—the Hekou District (the core test area, with 140 sampling points) and the Kenli District (the verification area, with 69 sampling points)—were investigated. Ground measurement data, unmanned aerial vehicle (UAV) multispectral imagery, and Sentinel-2A multispectral imagery were used as the data sources, and a satellite-UAV-ground integrated inversion of coastal soil salinity was performed. Correlation analyses and multiple regression methods were used to construct an accurate model. Then, a UAV-based inversion model was applied to the satellite imagery with reflectance normalization. Finally, the spatial and temporal universality of the UAV-based inversion model was verified and the soil salinity inversion results were obtained. The results showed that the green, red, red-edge, and near-infrared bands were significantly correlated with soil salinity and that the spectral parameters significantly improved this correlation; hence, the model is more effective when spectral parameters are combined with sensitive bands, with the modeling precision and verification precision of the best model being 0.743 and 0.809, respectively. The reflectance normalization yielded good results. These findings show that applying the UAV-based model to reflectance-normalized Sentinel-2A images produces results that are consistent with the actual situation. Moreover, the inversion results effectively reflect the distribution characteristics of the soil salinity in the core test area and the study area. This study integrated the advantages of satellite, UAV, and ground methods and proposed a method for the inversion of the salinity of coastal saline soils at different scales, which is of great value for real-time, rapid, and accurate soil salinity monitoring applications.
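
The sketch below shows a bare-bones multiple linear regression of salinity on band reflectances of the kind described; the synthetic samples, band set, and coefficients are assumptions, not the study's calibrated model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
X = rng.uniform(0.05, 0.5, size=(140, 4))        # green, red, red-edge, NIR reflectance
salinity = 2.0 + 8.0 * X[:, 3] - 5.0 * X[:, 1] + rng.normal(0, 0.3, 140)  # toy target

model = LinearRegression().fit(X, salinity)
print(f"R^2 = {r2_score(salinity, model.predict(X)):.3f}")
```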

Open Access Article
IR Thermography from UAVs to Monitor Thermal Anomalies in the Envelopes of Traditional Wine Cellars: Field Test
Remote Sens. 2019, 11(12), 1424; https://doi.org/10.3390/rs11121424 - 14 Jun 2019
Abstract
Infrared thermography (IRT) techniques for building inspection are becoming increasingly popular as non-destructive methods that provide valuable information about surface temperature (ST) and ST contrast (delta-T). With the advent of unmanned aerial vehicle (UAV)-mounted thermal cameras, IRT technology now offers improved flexibility from an aerial perspective for the study of building envelopes. A case-study cellar in Northwest (NW) Spain is used to assess the capability and reliability of low-altitude passive IRT in evaluating a typical semi-buried building. The study comparatively assesses the use of a pole-mounted FLIR B335 camera and a drone-mounted FLIR Vue Pro R camera for this purpose. Both tested IRT systems are effective in detecting thermal anomalies (e.g., thermal bridges, air leakages, constructive singularities, and moisture in the walls of the cellar) but pose some difficulties in performing accurate ST measurements under real operating conditions. Working with UAVs gives great flexibility for the inspection, but the angle of view strongly influences the radiometric data captured and must be taken into account to avoid disturbances due to specular reflections.
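
A minimal sketch of the delta-T computation on a synthetic thermogram follows; the region of interest, temperatures, and anomaly magnitude are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
thermogram = rng.normal(12.0, 0.2, size=(120, 160))   # wall surface temperatures (deg C)
thermogram[40:60, 70:100] += 1.5                      # simulated thermal bridge

mask = np.zeros(thermogram.shape, dtype=bool)         # suspected anomaly region
mask[40:60, 70:100] = True
delta_t = thermogram[mask].mean() - thermogram[~mask].mean()
print(f"delta-T = {delta_t:.2f} K")
```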

Open Access Article
Effect of Leaf Occlusion on Leaf Area Index Inversion of Maize Using UAV–LiDAR Data
Remote Sens. 2019, 11(9), 1067; https://doi.org/10.3390/rs11091067 - 06 May 2019
Cited by 2
Abstract
The leaf area index (LAI) is a key parameter for describing crop canopy structure and is of great importance for early nutrition diagnosis and breeding research. Light detection and ranging (LiDAR) is an active remote sensing technology that can detect the vertical distribution of a crop canopy. To quantitatively analyze the influence of the occlusion effect, three flights of multi-route, high-density LiDAR data were acquired at two time points, using an Unmanned Aerial Vehicle (UAV)-mounted RIEGL VUX-1 laser scanner at an altitude of 15 m, to evaluate the validity of LAI estimation in different layers under different planting densities. The results revealed that the normalized root-mean-square errors (NRMSE) for the upper, middle, and lower layers were 10.8%, 12.4%, and 42.8%, respectively, at a density of 27,495 plants/ha. The relationship between the route direction and ridge direction was also examined, and flight directions perpendicular to the maize planting ridges were found to perform better than those parallel to the ridges. The voxel-based method was used to invert the LAI, and the optimal voxel sizes were concentrated between 0.040 m and 0.055 m, approximately 1.7 to 2.3 times the average ground point distance. The characterization of the occlusion effect in different layers under different planting densities, the relationship between the route and ridge directions, and the optimal voxel size provide guidelines for UAV–LiDAR applications in crop canopy structure analysis.
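
The sketch below voxelises a synthetic canopy point cloud and counts occupied voxels per height layer, the kind of statistic a voxel-based inversion builds on; the plot size, layer limits, and 5 cm voxel are assumptions, not the paper's parameters.

```python
import numpy as np

def occupied_voxels(points, voxel=0.05):
    """Number of distinct occupied voxels for points given as an (N, 3) array in metres."""
    idx = np.floor(points / voxel).astype(int)
    return len({tuple(v) for v in idx})

rng = np.random.default_rng(6)
canopy = rng.uniform([0, 0, 0], [2, 2, 2.5], size=(20000, 3))   # stand-in maize plot
for z_lo, z_hi in [(0.0, 0.8), (0.8, 1.6), (1.6, 2.5)]:         # lower/middle/upper layer
    layer = canopy[(canopy[:, 2] >= z_lo) & (canopy[:, 2] < z_hi)]
    print(f"{z_lo}-{z_hi} m: {occupied_voxels(layer)} occupied 5 cm voxels")
```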

Open Access Article
Correlation Filter-Based Visual Tracking for UAV with Online Multi-Feature Learning
Remote Sens. 2019, 11(5), 549; https://doi.org/10.3390/rs11050549 - 06 Mar 2019
Cited by 1
Abstract
In this paper, a novel online learning-based tracker is presented for unmanned aerial vehicles (UAVs) in different types of tracking applications, such as pedestrian following, automotive chasing, and building inspection. The presented tracker uses novel features, i.e., intensity, color names, and saliency, to represent both the tracked object and its background information in a background-aware correlation filter (BACF) framework, instead of using only the histogram of oriented gradients (HOG) feature. In other words, four different voters, which combine the aforementioned four features with the BACF framework, are used to locate the object independently. After obtaining the response maps generated by these voters, a new strategy is proposed to fuse the response maps effectively. In the proposed fusion strategy, the peak-to-sidelobe ratio, which measures the peak strength of a response, is used to weight each response, thereby filtering the noise in each response and improving the final fused map. The fused response map is then used to accurately locate the object. Qualitative and quantitative experiments on 123 challenging UAV image sequences, i.e., UAV123, show that the novel tracking approach, the OMFL tracker, performs favorably against 13 state-of-the-art trackers in terms of accuracy, robustness, and efficiency. In addition, the multi-feature learning approach improves object tracking performance compared to single-feature learning methods applied in the literature.
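
A minimal sketch of the peak-to-sidelobe ratio (PSR) and a PSR-weighted fusion of response maps follows; the random maps and the exclusion window size are assumptions, not the OMFL implementation.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio: peak height vs. mean/std of the map away from the peak."""
    r0, c0 = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, r0 - exclude):r0 + exclude + 1, max(0, c0 - exclude):c0 + exclude + 1] = False
    sidelobe = response[mask]
    return (response[r0, c0] - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def fuse(responses):
    """Weight each response map by its PSR and sum the weighted maps."""
    weights = np.array([psr(r) for r in responses])
    weights /= weights.sum()
    return sum(w * r for w, r in zip(weights, responses))

rng = np.random.default_rng(7)
maps = [rng.random((50, 50)) for _ in range(4)]   # stand-ins for the four voters
maps[0][25, 25] = 5.0                             # one voter with a sharp, confident peak
fused = fuse(maps)
print(np.unravel_index(np.argmax(fused), fused.shape))
```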

Open Access Article
Rigorous Boresight Self-Calibration of Mobile and UAV LiDAR Scanning Systems by Strip Adjustment
Remote Sens. 2019, 11(4), 442; https://doi.org/10.3390/rs11040442 - 20 Feb 2019
Cited by 4
Abstract
Mobile LiDAR Scanning (MLS) systems and UAV LiDAR Scanning (ULS) systems equipped with precise Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU) positioning units and LiDAR sensors are increasingly used for the acquisition of high-density and high-accuracy point clouds because of their safety and efficiency. Without careful calibration of the boresight angles of MLS and ULS systems, the accuracy of the acquired data degrades severely. This paper proposes an automatic boresight self-calibration method for MLS and ULS systems using acquired multi-strip point clouds. The boresight angles of MLS and ULS systems are expressed in the direct geo-referencing equation and corrected by minimizing the misalignments between points scanned from different directions and different strips. Two datasets scanned by MLS systems and two datasets scanned by ULS systems were used to verify the proposed boresight calibration method. The experimental results show that the root-mean-square errors (RMSE) of misalignments between point correspondences of the four datasets after boresight calibration are 2.1 cm, 3.4 cm, 5.4 cm, and 6.1 cm, respectively, reductions of 59.6%, 75.4%, 78.0%, and 94.8% compared with those before boresight calibration.
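
For orientation, the sketch below evaluates one common form of the direct geo-referencing equation whose boresight rotation the strip adjustment estimates; the rotation convention, lever arm, and all numeric values are invented for illustration.

```python
import numpy as np

def rotation(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw in radians (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

x_scan = np.array([0.0, 12.5, -30.0])                 # point in the scanner frame (m)
lever_arm = np.array([0.05, 0.00, -0.10])             # scanner offset in the body frame (m)
r_imu = rotation(0.02, -0.01, 1.20)                   # attitude from GNSS/IMU
r_bore = rotation(np.radians(0.05), np.radians(-0.03), np.radians(0.10))  # boresight angles
x_gnss = np.array([500000.0, 4500000.0, 250.0])       # platform position (map frame)

# X_map = X_gnss + R_imu @ (R_boresight @ x_scan + lever_arm)
x_map = x_gnss + r_imu @ (r_bore @ x_scan + lever_arm)
print(x_map)
```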

Review

Open Access Review
A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance
Remote Sens. 2019, 11(18), 2144; https://doi.org/10.3390/rs11182144 - 14 Sep 2019
Abstract
Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. Nevertheless, UAV platforms still face important limitations, mainly related to autonomy and weight, that impact their remote sensing capabilities when capturing and processing the data required for developing autonomous and robust real-time obstacle detection and avoidance systems. In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance for highly autonomous UAVs. This article reviews the most recent developments in DL Unmanned Aerial Systems (UASs) and provides a detailed explanation of the main DL techniques. Moreover, the latest DL-UAV communication architectures are studied and their most common hardware is analyzed. Furthermore, this article enumerates the most relevant open challenges for current DL-UAV solutions, allowing future researchers to define a roadmap for devising the next generation of affordable autonomous DL-UAV IoT solutions.

Open Access Review
Unmanned Aerial Vehicle for Remote Sensing Applications—A Review
Remote Sens. 2019, 11(12), 1443; https://doi.org/10.3390/rs11121443 - 18 Jun 2019
Cited by 9
Abstract
Unmanned aerial vehicle (UAV) sensors and platforms are nowadays used in almost every application (e.g., agriculture, forestry, and mining) that needs observed information from top or oblique views. While they are intended to be a general remote sensing (RS) tool, the relevant RS data processing and analysis methods are still largely ad hoc to applications. Although the obvious advantages of UAV data are their high spatial resolution and flexibility in acquisition and sensor integration, there is in general a lack of systematic analysis of how these characteristics alter solutions for typical RS tasks such as land-cover classification, change detection, and thematic mapping. For instance, ultra-high-resolution data (less than 10 cm of Ground Sampling Distance (GSD)) bring more unwanted classes of objects (e.g., pedestrians and cars) into land-cover classification, and the often available 3D data generated from photogrammetric images call for more advanced techniques for geometric and spectral analysis. In this paper, we perform a critical review of RS tasks that involve UAV data and their derived products as their main sources, including raw perspective images, digital surface models, and orthophotos. In particular, we focus on solutions that address the “new” aspects of UAV data, including (1) ultra-high resolution; (2) availability of coherent geometric and spectral data; and (3) capability of simultaneously using multi-sensor data for fusion. Based on these solutions, we provide a brief summary of existing examples of UAV-based RS in agricultural, environmental, urban, and hazard-assessment applications, and, by discussing their practical potential, we share our views on future research directions and draw concluding remarks.

Open Access Review
Surveying Wild Animals from Satellites, Manned Aircraft and Unmanned Aerial Systems (UASs): A Review
Remote Sens. 2019, 11(11), 1308; https://doi.org/10.3390/rs11111308 - 01 Jun 2019
Cited by 1
Abstract
This article reviews studies of wild animal surveys based on multiple platforms, including satellites, manned aircraft, and unmanned aircraft systems (UASs), and focuses on the data used, animal detection methods, and their accuracies. We also discuss the advantages and limitations of each type of remote sensing data and highlight some new research opportunities and challenges. Submeter very-high-resolution (VHR) spaceborne imagery has potential for modeling the population dynamics of large (>0.6 m) wild animals at large spatial and temporal scales, but has difficulty discerning small (<0.6 m) animals at the species level, even though high-resolution commercial satellites, such as WorldView-3 and -4, can collect images with a ground resolution of up to 0.31 m in panchromatic mode. This situation will not change unless satellite image resolution is greatly improved in the future. Manned aerial surveys have long been employed to capture the centimeter-scale images required for animal censuses over large areas. However, such aerial surveys are costly to implement over small areas and can cause significant disturbance to wild animals because of their noise. In contrast, UAS surveys are seen as a safe, convenient, and less expensive alternative to ground-based and conventional manned aerial surveys, but most UASs can cover only small areas. The proposed use of UAS imagery in combination with VHR satellite imagery would produce critical population data for large wild animal species and colonies over large areas. The development of software systems for automatically producing image mosaics and recognizing wild animals will further improve survey efficiency.
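
As a reminder of the resolution arithmetic behind such detectability arguments, the sketch below evaluates the standard ground-sampling-distance relation GSD = pixel size × altitude / focal length; the sensor values are illustrative assumptions only.

```python
def gsd(pixel_size_m, focal_length_m, altitude_m):
    """Ground sampling distance in metres for a nadir-looking frame camera."""
    return pixel_size_m * altitude_m / focal_length_m

# e.g. a 4.5 um pixel and 35 mm lens flown at 120 m -> ~1.5 cm ground pixels
print(f"{gsd(4.5e-6, 0.035, 120.0) * 100:.1f} cm")
```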

Other

Open Access Letter
The Influence of Vegetation Characteristics on Individual Tree Segmentation Methods with Airborne LiDAR Data
Remote Sens. 2019, 11(23), 2880; https://doi.org/10.3390/rs11232880 - 03 Dec 2019
Abstract
This study investigated the effects of forest type, leaf area index (LAI), canopy cover (CC), tree density (TD), and the coefficient of variation of tree height (CVTH) on the accuracy of different individual tree segmentation methods (i.e., canopy height model, pit-free canopy height model (PFCHM), point cloud, and layer stacking seed point) with LiDAR data. A total of 120 sites in the Sierra Nevada Forest (California) and Shavers Creek Watershed (Pennsylvania) in the United States, covering various vegetation types and characteristics, were used to analyze the performance of the four selected individual tree segmentation algorithms. The results showed that the PFCHM performed best in all forest types, especially in conifer forests. The main forest characteristics influencing the segmentation methods were LAI and CC in conifer forests, LAI and TD in broadleaf forests, and CVTH in mixed forests. Most of the vegetation characteristics (i.e., LAI, CC, and TD) correlated negatively with all segmentation methods, while the effect of CVTH varied with forest type. These results can help guide the selection of an individual tree segmentation method given the influence of vegetation characteristics.
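
A minimal sketch of the first step of CHM-based segmentation, local-maximum tree-top detection, is given below; the synthetic canopy height model, window size, and 3 m height floor are assumptions, not the compared algorithms themselves.

```python
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(8)
chm = rng.uniform(0, 2, size=(100, 100))                 # understorey noise (m)
yy, xx = np.mgrid[0:100, 0:100]
for r, c in rng.integers(10, 90, size=(15, 2)):          # plant 15 synthetic crowns
    chm = np.maximum(chm, 20.0 * np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / 18.0))

# A pixel is a candidate tree top if it is the maximum of its 7x7 window and tall enough.
local_max = (chm == maximum_filter(chm, size=7)) & (chm > 3.0)
print(int(local_max.sum()), "candidate tree tops")
```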
