
Special Issue "UAV or Drones for Remote Sensing Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors, Control, and Telemetry".

Deadline for manuscript submissions: closed (15 December 2017).

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Dr. Felipe Gonzalez Toro
Guest Editor
School of Electrical Engineering and Computer Science, Australian Research Center for Aerospace Automation (ARCAA), Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
Interests: UAVs; gas sensors; imaging sensors; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection; sense and avoid
Prof. Dr. Antonios Tsourdos
Co-Guest Editor
Centre for Autonomous and Cyber-Physical Systems, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, MK43 0AL, UK
Interests: UAV; sensor and data fusion; network decision systems; information management; sense and avoid; guidance and navigation; target tracking and estimation

Special Issue Information

Dear Colleagues,

The rapid development and growth of UAVs as a remote sensing platform, as well as advances in the miniaturization of instrumentation and data systems, have resulted in an increasing uptake of this technology in the environmental and remote sensing science community.

Even though tough regulations across the globe may still limit the broader use of UAVs, their use in precision agriculture, ecology, atmospheric research, disaster response, bio-security, ecological and reef monitoring, forestry, fire monitoring, quick-response measurements for emergency disasters, Earth science research, volcanic gas sampling, monitoring of gas pipelines and mining plumes, humanitarian observations, and biological/chemosensing tasks continues to increase.

The Special Issue provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of UAV developments, applications of UAVs for Remote Sensing, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities.

Prospective authors are invited to contribute to this Special Issue of Sensors (Impact Factor: 2.677 (2016); 5-Year Impact Factor: 2.964 (2016)) by submitting an original manuscript.

The scope of this Special Issue includes, but is not limited to:

  • Applications of UAVs in GPS-denied environments

  • Improvements in UAV sensor technology

  • Micro-UAV applications

  • UAV sensor design

  • Descriptions of processing algorithms applied to UAV-based imagery datasets

  • The use of optical, multi-spectral, hyperspectral, laser, and SAR technologies onboard UAVs

  • Gas analyzers and sensors

  • Artificial intelligence and data mining strategies for UAV-acquired datasets

  • UAV onboard data storage, transmission, and retrieval

  • Collaborative strategies and mechanisms to control multiple UAVs and sensor networks

  • Multi-platform UAV, AUV, and ground robot networks

  • UAV sensor applications, including: precision agriculture; construction; mining; pest detection; forestry; mammal species tracking; search and rescue; target tracking; monitoring of the atmosphere; chemical, biological, and natural disaster phenomena; fire and flood prevention; reef and volcanic monitoring; Earth science research; pollution monitoring; micro-climates; land use; ecology; bio-security; quick-response measurements for emergency disasters; volcanic gas sampling; monitoring of gas pipelines; biological/chemosensing tasks; and humanitarian observations

Assoc. Prof. Dr. Felipe Gonzalez Toro
Prof. Dr. Antonios Tsourdos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (38 papers)


Research

Open Access Article
UAVs, Hyperspectral Remote Sensing, and Machine Learning Revolutionizing Reef Monitoring
Sensors 2018, 18(7), 2026; https://doi.org/10.3390/s18072026 - 25 Jun 2018
Cited by 2
Abstract
Recent advances in unmanned aerial system (UAS) sensed imagery, sensor quality/size, and geospatial image processing can enable UASs to rapidly and continually monitor coral reefs, to determine the type of coral and signs of coral bleaching. This paper describes an unmanned aerial vehicle (UAV) remote sensing methodology to increase the efficiency and accuracy of existing surveillance practices. The methodology uses a UAV integrated with advanced digital hyperspectral and ultra HD colour (RGB) sensors, and machine learning algorithms. This paper describes the combination of airborne RGB and hyperspectral imagery with in-water survey data of several types of coral under diverse levels of bleaching. The paper also describes the technology used, the sensors, the UAS, the flight operations, the processing workflow of the datasets, the methods for combining multiple airborne and in-water datasets, and finally presents relevant results of material classification. The development of the methodology for the collection and analysis of airborne hyperspectral and RGB imagery would provide coral reef researchers, other scientists, and UAV practitioners with reliable data collection protocols and faster processing techniques to achieve remote sensing objectives. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Aerial Mapping of Forests Affected by Pathogens Using UAVs, Hyperspectral Sensors, and Artificial Intelligence
Sensors 2018, 18(4), 944; https://doi.org/10.3390/s18040944 - 22 Mar 2018
Cited by 10
Abstract
The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimentation case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec® camera, orthorectified in Headwall SpectralView®, and processed in the Python programming language using the eXtreme Gradient Boosting (XGBoost), Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile enough to be applied to additional datasets taken with different image sensors and to the processing of large datasets with freeware tools. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
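
The classification stage this abstract describes, labelled spectral samples fed to a gradient-boosting multiclass classifier, can be sketched roughly as follows. The synthetic band values, class names, and class offsets are illustrative stand-ins for the paper's labelled hyperspectral samples, and scikit-learn's GradientBoostingClassifier substitutes for XGBoost so the sketch stays dependency-light:

```python
# Sketch of a pixel-classification stage: spectral band values in, class labels out.
# All data here is synthetic; the five class names mirror the paper's split into
# two deterioration-status classes and three background classes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_bands = 20  # assumption: number of spectral bands per pixel sample

classes = ["healthy", "deteriorated", "soil", "water", "shadow"]
X = rng.normal(size=(1000, n_bands))
y = rng.integers(0, len(classes), size=1000)
# Give each class a distinct spectral offset so the classes are separable.
X = X + y[:, None] * 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"multiclass accuracy: {accuracy:.2f}")
```

On real data the same shape of pipeline applies: each labelled pixel becomes a row of band reflectances, and per-class detection rates come from the confusion matrix rather than a single accuracy score.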

Open Access Article
Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery
Sensors 2018, 18(3), 712; https://doi.org/10.3390/s18030712 - 27 Feb 2018
Cited by 21
Abstract
An unmanned aerial vehicle (UAV) equipped with global positioning systems (GPS) can provide direct georeferenced imagery, mapping an area with high resolution. So far, the major difficulty in wildfire image classification is the lack of unified identification marks: the fire features of color, shape, and texture (smoke, flame, or both) and the background can vary significantly from one scene to another. Deep learning (e.g., a deep convolutional neural network, DCNN) is very effective in high-level feature learning; however, a substantial training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. As the proposed method can effectively avoid feature loss caused by direct resizing, it is used in data augmentation and the formation of a standard fire image dataset, ‘UAV_Fire’. A 15-layered self-learning DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with regard to its validation accuracy. The proposed architecture outperformed previous methods by achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ guaranteed an average processing speed of 41.5 ms per image for real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 sampled images from wildfire news reports, and all of them were accurately identified. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
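
The saliency-driven localisation step can be illustrated with the classic spectral-residual saliency algorithm (Hou and Zhang, 2007). The paper proposes its own saliency detector, so this is only an illustrative stand-in, and the synthetic bright "fire" patch is an assumption:

```python
# Spectral-residual saliency: regions whose spectral content deviates from the
# smooth log-amplitude trend light up in the reconstructed map.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img: np.ndarray) -> np.ndarray:
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Residual = log amplitude minus its 3x3 local average.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Synthetic grey image: flat background with a bright, textured "fire" patch.
rng = np.random.default_rng(1)
img = np.full((64, 64), 0.2) + rng.normal(0, 0.01, (64, 64))
img[20:32, 24:40] += 0.6 + rng.normal(0, 0.1, (12, 16))

sal = spectral_residual_saliency(img)
mask = np.zeros((64, 64), bool)
mask[20:32, 24:40] = True
inside, outside = sal[mask].mean(), sal[~mask].mean()
print(f"mean saliency inside patch: {inside:.3f}, outside: {outside:.3f}")
```

The salient region found this way can then be cropped at native resolution and fed to the classifier, which is the resize-free augmentation idea the abstract refers to.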

Open Access Article
Online Aerial Terrain Mapping for Ground Robot Navigation
Sensors 2018, 18(2), 630; https://doi.org/10.3390/s18020630 - 20 Feb 2018
Cited by 4
Abstract
This work presents a collaborative unmanned aerial and ground vehicle system which utilizes the aerial vehicle’s overhead view to inform the ground vehicle’s path planning in real time. The aerial vehicle acquires imagery which is assembled into an orthomosaic and then classified. These terrain classes are used to estimate relative navigation costs for the ground vehicle so energy-efficient paths may be generated and then executed. The two vehicles are registered in a common coordinate frame using a real-time kinematic global positioning system (RTK GPS) and all image processing is performed onboard the unmanned aerial vehicle, which minimizes the data exchanged between the vehicles. This paper describes the architecture of the system and quantifies the registration errors between the vehicles. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
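
The cost-map planning idea (terrain classes mapped to traversal costs, then energy-efficient paths searched over the result) can be sketched as a Dijkstra search over a grid of class labels. The class names and per-class costs below are hypothetical, not taken from the paper:

```python
# Dijkstra over a 4-connected grid of terrain-class labels.
# Per-class costs are illustrative; impassable terrain gets infinite cost.
import heapq

COSTS = {"grass": 1.0, "brush": 4.0, "water": float("inf")}

def plan(grid, start, goal):
    """Return the minimum total traversal cost from start to goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + COSTS[grid[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [
    ["grass", "grass", "water", "grass"],
    ["grass", "brush", "water", "grass"],
    ["grass", "grass", "grass", "grass"],
    ["brush", "brush", "grass", "grass"],
]
cost = plan(grid, (0, 0), (0, 3))
print(f"minimum traversal cost: {cost}")
```

Impassable classes are handled by infinite cost, which the search simply never relaxes, so the planner routes around the water cells rather than through them.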

Open Access Article
Devising Mobile Sensing and Actuation Infrastructure with Drones
Sensors 2018, 18(2), 624; https://doi.org/10.3390/s18020624 - 19 Feb 2018
Cited by 8
Abstract
Vast applications and services have been enabled as the number of mobile or sensing devices with communication capabilities has grown. However, managing the devices, integrating networks, or combining services across different networks has become a new problem, since each network is not directly connected via back-end core networks or servers. This issue has long been discussed, especially in wireless sensor and actuator networks (WSANs). In such systems, sensors and actuators are tightly coupled, so when an independent WSAN needs to collaborate with other networks, it is difficult to adequately combine them into an integrated infrastructure. In this paper, we propose drone-as-a-gateway (DaaG), which uses drones as mobile gateways to interconnect isolated networks or combine independent services. Our system focuses on the service being provided in order of importance, unlike an adaptive simple mobile sink system or delay-tolerant system. Our simulation results have shown that the proposed system is able to activate actuators in the order of importance of the service, which uses separate sensors’ data, and that it takes almost the same amount of time as other path-planning algorithms. Moreover, we have implemented DaaG and presented results from a field test to show that it can enable large-scale on-demand deployment of sensing and actuation infrastructure or the Internet of Things (IoT). Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
UAVs and Machine Learning Revolutionising Invasive Grass and Vegetation Surveys in Remote Arid Lands
Sensors 2018, 18(2), 605; https://doi.org/10.3390/s18020605 - 16 Feb 2018
Cited by 7
Abstract
The monitoring of invasive grasses and vegetation in remote areas is challenging, costly, and on the ground sometimes dangerous. Satellite and manned aircraft surveys can assist but their use may be limited due to the ground sampling resolution or cloud cover. Straightforward and accurate surveillance methods are needed to quantify rates of grass invasion, offer appropriate vegetation tracking reports, and apply optimal control methods. This paper presents a pipeline process to detect and generate a pixel-wise segmentation of invasive grasses, using buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) as examples. The process integrates unmanned aerial vehicles (UAVs), also commonly known as drones, high-resolution red, green, blue colour model (RGB) cameras, and a data processing approach based on machine learning algorithms. The methods are illustrated with data acquired in Cape Range National Park, Western Australia (WA), Australia, orthorectified in Agisoft Photoscan Pro, and processed in the Python programming language using the scikit-learn and eXtreme Gradient Boosting (XGBoost) libraries. In total, 342,626 samples were extracted from the obtained data set and labelled into six classes. Segmentation results provided an individual detection rate of 97% for buffel grass and 96% for spinifex, with a global multiclass pixel-wise detection rate of 97%. Obtained results were robust against illumination changes, object rotation, occlusion, background cluttering, and floral density variation. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges
Sensors 2018, 18(2), 586; https://doi.org/10.3390/s18020586 - 14 Feb 2018
Cited by 18
Abstract
This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony in the single-frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
UAV-Assisted Dynamic Clustering of Wireless Sensor Networks for Crop Health Monitoring
Sensors 2018, 18(2), 555; https://doi.org/10.3390/s18020555 - 11 Feb 2018
Cited by 7
Abstract
In this study, a crop health monitoring system is developed using state-of-the-art technologies, including wireless sensors and Unmanned Aerial Vehicles (UAVs). Conventionally, data are collected from sensor nodes either by fixed base stations or by mobile sinks. Mobile sinks are nowadays considered the better choice due to their improved network coverage and energy utilization. Usually, the mobile sink is used in two ways: either it performs a random walk to find the scattered nodes and collect data, or it follows a pre-defined path established by the ground network/clusters. Neither of these options is suitable in our scenario due to factors such as dynamic data collection, the strict target area required to be scanned, the unavailability of a large number of nodes, and the dynamic path of the UAV; most importantly, none of these are known in advance. The contribution of this paper is the formation of dynamic runtime clusters of field sensors that takes the above-mentioned factors into account. Furthermore, a mechanism (a Bayesian classifier) is defined to select the best node as cluster head. The proposed system is validated through simulation results and lab and in-field experiments using concept devices. The obtained results are encouraging, especially in terms of deployment time, energy efficiency, throughput, and ease of use. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
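
The cluster-head selection mechanism (a Bayesian classifier scoring candidate nodes) might look like the sketch below. The node features, training labels, and thresholds are invented stand-ins, since the abstract does not specify them, and scikit-learn's GaussianNB stands in for whatever Bayesian model the paper uses:

```python
# Naive-Bayes scoring of candidate cluster heads from per-node features.
# Features and labels are synthetic; the real feature set is not given here.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

def sample(n, head):
    """Hypothetical features: [battery %, link quality, distance to UAV path]."""
    battery = rng.uniform(60, 100, n) if head else rng.uniform(10, 70, n)
    link = rng.uniform(0.6, 1.0, n) if head else rng.uniform(0.1, 0.7, n)
    dist = rng.uniform(0, 40, n) if head else rng.uniform(30, 100, n)
    return np.column_stack([battery, link, dist])

X = np.vstack([sample(200, True), sample(200, False)])
y = np.array([1] * 200 + [0] * 200)  # 1 = suitable cluster head
clf = GaussianNB().fit(X, y)

# Score candidate nodes and pick the most probable cluster head.
candidates = np.array([[95.0, 0.9, 10.0], [30.0, 0.3, 80.0], [70.0, 0.65, 35.0]])
probs = clf.predict_proba(candidates)[:, 1]
best = int(np.argmax(probs))
print(f"cluster head: node {best} (p = {probs[best]:.2f})")
```

In a deployment, the features would come from the nodes' own status reports, and the classifier would be retrained or recalibrated as field conditions change.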

Open Access Article
Experimental Study of Multispectral Characteristics of an Unmanned Aerial Vehicle at Different Observation Angles
Sensors 2018, 18(2), 428; https://doi.org/10.3390/s18020428 - 01 Feb 2018
Cited by 2
Abstract
This study investigates multispectral characteristics of an unmanned aerial vehicle (UAV) at different observation angles by experiment. The UAV and its engine are tested on the ground in the cruise state. Spectral radiation intensities at different observation angles are obtained in the infrared band of 0.9–15 μm by a spectral radiometer. Meanwhile, infrared images are captured separately by long-wavelength infrared (LWIR), mid-wavelength infrared (MWIR), and short-wavelength infrared (SWIR) cameras. Additionally, orientation maps of the radiation area and radiance are obtained. The results suggest that the spectral radiation intensity of the UAV is determined by its exhaust plume and that the main infrared emission bands occur at 2.7 μm and 4.3 μm. At observation angles in the range of 0°–90°, the radiation area of the UAV in MWIR band is greatest; however, at angles greater than 90°, the radiation area in the SWIR band is greatest. In addition, the radiance of the UAV at an angle of 0° is strongest. These conclusions can guide IR stealth technique development for UAVs. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Automatic Coregistration Algorithm to Remove Canopy Shaded Pixels in UAV-Borne Thermal Images to Improve the Estimation of Crop Water Stress Index of a Drip-Irrigated Cabernet Sauvignon Vineyard
Sensors 2018, 18(2), 397; https://doi.org/10.3390/s18020397 - 30 Jan 2018
Cited by 9
Abstract
Water stress caused by water scarcity has a negative impact on the wine industry. Several strategies have been implemented for optimizing water application in vineyards. In this regard, midday stem water potential (SWP) and thermal infrared (TIR) imaging for crop water stress index (CWSI) have been used to assess plant water stress on a vine-by-vine basis without considering the spatial variability. Unmanned Aerial Vehicle (UAV)-borne TIR images are used to assess the canopy temperature variability within vineyards that can be related to the vine water status. Nevertheless, when aerial TIR images are captured over canopy, internal shadow canopy pixels cannot be detected, leading to mixed information that negatively impacts the relationship between CWSI and SWP. This study proposes a methodology for automatic coregistration of thermal and multispectral images (ranging between 490 and 900 nm) obtained from a UAV to remove shadow canopy pixels using a modified scale invariant feature transformation (SIFT) computer vision algorithm and Kmeans++ clustering. Our results indicate that our proposed methodology improves the relationship between CWSI and SWP when shadow canopy pixels are removed from a drip-irrigated Cabernet Sauvignon vineyard. In particular, the coefficient of determination (R2) increased from 0.64 to 0.77. In addition, values of the root mean square error (RMSE) and standard error (SE) decreased from 0.2 to 0.1 MPa and 0.24 to 0.16 MPa, respectively. Finally, this study shows that the negative effect of shadow canopy pixels was higher in those vines with water stress compared with well-watered vines. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
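
The crop water stress index itself is computed from canopy temperature against wet and dry reference temperatures, CWSI = (Tc − Twet) / (Tdry − Twet). A small sketch with illustrative temperatures shows the bias the paper targets: cooler shadowed pixels mixed into the canopy sample pull the index downward, so removing them raises the estimate toward the true sunlit-canopy value. All temperatures here are assumed, not the paper's data:

```python
# CWSI from canopy temperatures; shadow pixels bias the index low.
import numpy as np

def cwsi(canopy_temps, t_wet, t_dry):
    """Crop water stress index: 0 = fully transpiring, 1 = fully stressed."""
    return (np.asarray(canopy_temps) - t_wet) / (t_dry - t_wet)

rng = np.random.default_rng(3)
t_wet, t_dry = 22.0, 38.0  # illustrative wet/dry reference temperatures (deg C)

# Sunlit canopy pixels with cooler shadowed pixels mixed in.
sunlit = rng.normal(31.0, 0.5, 300)
shadow = rng.normal(25.0, 0.5, 60)
mixed = np.concatenate([sunlit, shadow])

cwsi_mixed = cwsi(mixed, t_wet, t_dry).mean()
cwsi_clean = cwsi(sunlit, t_wet, t_dry).mean()
print(f"CWSI with shadow pixels: {cwsi_mixed:.2f}, after removal: {cwsi_clean:.2f}")
```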

Open Access Article
A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data
Sensors 2018, 18(1), 260; https://doi.org/10.3390/s18010260 - 17 Jan 2018
Cited by 19
Abstract
Recent advances in remote sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera
Sensors 2018, 18(1), 225; https://doi.org/10.3390/s18010225 - 14 Jan 2018
Cited by 4
Abstract
In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle’s (UAV) camera and to improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. Full article
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
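
The first step, compressing each image's feature points into three principal-component points, can be sketched with a plain SVD. The exact construction in the paper may differ; treat the three-point signature below (centroid plus one point along each principal axis, scaled by the axis's standard deviation) as one plausible reading:

```python
# Compress a set of 2-D feature points into a three-point PCA signature,
# then compare images by the distance between their signatures.
import numpy as np

def principal_points(pts):
    """Centroid plus one point along each principal axis (scaled by its std)."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    n = len(pts)
    return np.vstack([centroid,
                      centroid + s[0] / np.sqrt(n) * vt[0],
                      centroid + s[1] / np.sqrt(n) * vt[1]])

rng = np.random.default_rng(4)
pts_a = rng.normal([100, 50], [20, 5], size=(400, 2))
pts_b = pts_a + [3.0, 1.0]  # small camera shift: signature barely changes
pts_c = rng.normal([400, 300], [5, 30], size=(400, 2))  # different scene

sig = {k: principal_points(p) for k, p in [("a", pts_a), ("b", pts_b), ("c", pts_c)]}
d_ab = np.linalg.norm(sig["a"] - sig["b"])
d_ac = np.linalg.norm(sig["a"] - sig["c"])
print(f"signature distance a-b: {d_ab:.1f}, a-c: {d_ac:.1f}")
```

A small signature distance suggests overlapping views (a redundant frame), while a large one suggests a new key image worth inserting into the queue.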

Open Access Article
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs
Sensors 2018, 18(1), 206; https://doi.org/10.3390/s18010206 - 13 Jan 2018
Abstract
The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with the observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the rationality of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters. Full article
Open Access Article
Assessment of the Possibility of Using Unmanned Aerial Vehicles (UAVs) for the Documentation of Hiking Trails in Alpine Areas
Sensors 2018, 18(1), 81; https://doi.org/10.3390/s18010081 - 29 Dec 2017
Cited by 6
Abstract
The research described in this paper deals with the documentation of hiking trails in alpine areas. The study presents a novel research topic, applying up-to-date survey techniques and top quality equipment with practical applications in nature conservation. The research presents the initial part of the process—capturing imagery, photogrammetric processing, quality checking, and a discussion on possibilities of the further data analysis. The research described in this article was conducted in the Tatra National Park (TNP) in Poland, which is considered as one of the most-visited national parks in Europe. The exceptional popularity of this place is responsible for intensification of morphogenetic processes, resulting in the development of numerous forms of erosion. This article presents the outcomes of research, whose purpose was to verify the usability of UAVs to check the condition of hiking trails in alpine areas. An octocopter equipped with a non-metric camera was used for measurements. Unlike traditional methods of measuring landscape features, such a solution facilitates acquisition of quasi-continuous data that has uniform resolution throughout the study area and high spatial accuracy. It is also a relatively cheap technology, which is its main advantage over equally popular laser scanning. The paper presents the complete methodology of data acquisition in harsh conditions and demanding locations of hiking trails on steep Tatra slopes. The paper also describes stages that lead to the elaboration of basic photogrammetric products relying on structure from motion (SfM) technology and evaluates the accuracy of the materials obtained. Finally, it shows the applicability of the prepared products to the evaluation of the spatial reach and intensity of erosion along hiking trails, and to the study of plant succession or tree stand condition in the area located next to hiking trails. Full article
Open Access Article
Vision-Based Target Finding and Inspection of a Ground Target Using a Multirotor UAV System
Sensors 2017, 17(12), 2929; https://doi.org/10.3390/s17122929 - 17 Dec 2017
Cited by 10
Abstract
In this paper, a system that uses an algorithm for target detection and navigation and a multirotor Unmanned Aerial Vehicle (UAV) for finding a ground target and inspecting it closely is presented. The system can also be used for accurate and safe delivery of payloads or spot spraying applications in site-specific crop management. A downward-looking camera attached to a multirotor is used to find the target on the ground. The UAV descends to the target and hovers above the target for a few seconds to inspect the target. A high-level decision algorithm based on an OODA (observe, orient, decide, and act) loop was developed as a solution to address the problem. Navigation of the UAV was achieved by continuously sending local position messages to the autopilot via Mavros. The proposed system performed hovering above the target in three different stages: locate, descend, and hover. The system was tested in multiple trials, in simulations and outdoor tests, from heights of 10 m to 40 m. Results show that the system is highly reliable and robust to sensor errors, drift, and external disturbance. Full article
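The three stages named in the abstract (locate, descend, hover) amount to a small state machine in the high-level decision loop. The sketch below is a toy version of that idea; the state names match the abstract, but the thresholds and transition logic are illustrative assumptions, not the authors' implementation.

```python
class TargetApproach:
    """Toy three-stage controller (locate -> descend -> hover) mirroring the
    stages named in the abstract. Thresholds are illustrative assumptions."""

    def __init__(self, hover_alt=2.0, lock_radius=0.5):
        self.state = "locate"
        self.hover_alt = hover_alt      # altitude (m) at which to start hovering
        self.lock_radius = lock_radius  # horizontal error (m) considered "locked on"

    def step(self, target_offset, altitude):
        """Advance the state machine one tick.
        target_offset: horizontal distance (m) from camera centre to target;
        altitude: current height above ground (m)."""
        if self.state == "locate":
            # stay in search until the target is centred under the camera
            if target_offset < self.lock_radius:
                self.state = "descend"
        elif self.state == "descend":
            # descend toward the target until the inspection altitude is reached
            if altitude <= self.hover_alt:
                self.state = "hover"
        return self.state
```

In the real system, each tick would translate the current state into a local position setpoint sent to the autopilot (via Mavros, as the abstract notes).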
Open Access Article
Observing Spring and Fall Phenology in a Deciduous Forest with Aerial Drone Imagery
Sensors 2017, 17(12), 2852; https://doi.org/10.3390/s17122852 - 08 Dec 2017
Cited by 10
Abstract
Plant phenology is a sensitive indicator of the effects of global change on terrestrial ecosystems and controls the timing of key ecosystem functions including photosynthesis and transpiration. Aerial drone imagery and photogrammetric techniques promise to advance the study of phenology by enabling the creation of distortion-free orthomosaics of plant canopies at the landscape scale, but with branch-level image resolution. The main goal of this study is to determine the leaf life cycle events corresponding to phenological metrics derived from automated analyses based on color indices calculated from drone imagery. For an oak-dominated, temperate deciduous forest in the northeastern USA, we find that plant area index (PAI) correlates with a canopy greenness index during spring green-up, and a canopy redness index during autumn senescence. Additionally, greenness and redness metrics are significantly correlated with the timing of budburst and leaf expansion on individual trees in spring. However, we note that the specific color index for individual trees must be carefully chosen if new foliage in spring appears red, rather than green—which we observed for some oak trees. In autumn, both decreasing greenness and increasing redness correlate with leaf senescence. Maximum redness indicates the beginning of leaf fall, and the progression of leaf fall correlates with decreasing redness. We also find that cooler air temperature microclimates near a forest edge bordering a wetland advance the onset of senescence. These results demonstrate the use of drones for characterizing the organismic-level variability of phenology in a forested landscape and advance our understanding of which phenophase transitions correspond to color-based metrics derived from digital image analysis. Full article
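The greenness and redness metrics in the abstract are color indices computed per pixel from RGB imagery. A common choice is the green and red chromatic coordinates, GCC = G/(R+G+B) and RCC = R/(R+G+B); whether these exact formulas match the paper's indices would need to be checked against it, so treat this as a sketch of the general technique.

```python
def chromatic_coordinates(pixels):
    """Mean green (GCC) and red (RCC) chromatic coordinates over an iterable
    of (R, G, B) pixels. GCC = G/(R+G+B), RCC = R/(R+G+B): standard canopy
    greenness/redness indices used in phenology time series."""
    gcc_sum = rcc_sum = 0.0
    n = 0
    for r, g, b in pixels:
        total = r + g + b
        if total == 0:
            continue  # skip black pixels to avoid division by zero
        gcc_sum += g / total
        rcc_sum += r / total
        n += 1
    if n == 0:
        return 0.0, 0.0
    return gcc_sum / n, rcc_sum / n
```

Tracking these two means over a season of orthomosaics yields the green-up and senescence curves whose transition dates the study relates to budburst, leaf expansion, and leaf fall.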
Open Access Article
Robust Vehicle Detection in Aerial Images Based on Cascaded Convolutional Neural Networks
Sensors 2017, 17(12), 2720; https://doi.org/10.3390/s17122720 - 24 Nov 2017
Cited by 19
Abstract
Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on a sliding-window approach were developed and achieved acceptable performance, but these models are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient when applied to aerial image data, because existing CNN-based models struggle with small-object detection and precise localization. To improve the detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network generates a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. The generated candidate regions are then fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance, not only in detection accuracy but also in detection speed. Full article
Open Access Article
Designing and Testing a UAV Mapping System for Agricultural Field Surveying
Sensors 2017, 17(12), 2703; https://doi.org/10.3390/s17122703 - 23 Nov 2017
Cited by 21
Abstract
A Light Detection and Ranging (LiDAR) sensor mounted on an Unmanned Aerial Vehicle (UAV) can map the overflown environment in point clouds. Mapped canopy heights allow for the estimation of crop biomass in agriculture. The work presented in this paper contributes to sensory UAV setup design for mapping and textual analysis of agricultural fields. LiDAR data are combined with data from Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU) sensors to conduct environment mapping for point clouds. The proposed method facilitates LiDAR recordings in an experimental winter wheat field. Crop height estimates ranging from 0.35–0.58 m are correlated to the applied nitrogen treatments of 0–300 kg N ha⁻¹. The LiDAR point clouds are recorded, mapped, and analysed using the functionalities of the Robot Operating System (ROS) and the Point Cloud Library (PCL). Crop volume estimation is based on a voxel grid with a spatial resolution of 0.04 × 0.04 × 0.001 m. Two different flight patterns are evaluated at an altitude of 6 m to determine the impacts of the mapped LiDAR measurements on crop volume estimations. Full article
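Voxel-grid volume estimation, as used above, can be sketched in a few lines: bin every LiDAR return into a grid cell at the stated resolution and sum the volume of the occupied cells. This is a generic illustration of the technique, not the authors' ROS/PCL pipeline.

```python
def crop_volume(points, dx=0.04, dy=0.04, dz=0.001):
    """Estimate volume from a point cloud by counting occupied voxel-grid
    cells. Default cell size matches the abstract: 0.04 x 0.04 x 0.001 m.
    points: iterable of (x, y, z) coordinates in metres."""
    # a set of integer cell indices deduplicates points falling in one voxel
    occupied = {(int(x // dx), int(y // dy), int(z // dz))
                for x, y, z in points}
    return len(occupied) * dx * dy * dz
```

In PCL the same binning is done by the `VoxelGrid` filter; counting occupied cells between the ground surface and the canopy top then gives the crop volume.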
Open Access Article
Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments
Sensors 2017, 17(11), 2535; https://doi.org/10.3390/s17112535 - 03 Nov 2017
Cited by 19
Abstract
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. Full article
Open Access Article
Artificial Neural Network to Predict Vine Water Status Spatial Variability Using Multispectral Information Obtained from an Unmanned Aerial Vehicle (UAV)
Sensors 2017, 17(11), 2488; https://doi.org/10.3390/s17112488 - 30 Oct 2017
Cited by 12
Abstract
Water stress, which affects yield and wine quality, is often evaluated using the midday stem water potential (Ψstem). However, this measurement is acquired on a per-plant basis and does not account for the spatial variability of vine water status. Multispectral cameras mounted on an unmanned aerial vehicle (UAV) can capture the variability of vine water stress across a whole field. It has been reported that conventional multispectral indices (CMI) that use information in the 500–800 nm range do not accurately predict plant water status, since they are not sensitive to water content. The objective of this study was to develop artificial neural network (ANN) models derived from multispectral images to predict the Ψstem spatial variability of a drip-irrigated Carménère vineyard in Talca, Maule Region, Chile. The coefficients of determination (R2) obtained between ANN outputs and ground-truth measurements of Ψstem were between 0.56 and 0.87, with the best performance observed for the model that included the 550, 570, 670, 700 and 800 nm bands. Validation analysis indicated that the ANN model could estimate Ψstem with a mean absolute error (MAE) of 0.1 MPa, a root mean square error (RMSE) of 0.12 MPa, and a relative error (RE) of −9.1%. For the validation of the CMI, the MAE, RMSE and RE values were between 0.26–0.27 MPa, 0.32–0.34 MPa and −24.2–25.6%, respectively. Full article
Open Access Article
Comparing RIEGL RiCOPTER UAV LiDAR Derived Canopy Height and DBH with Terrestrial LiDAR
Sensors 2017, 17(10), 2371; https://doi.org/10.3390/s17102371 - 17 Oct 2017
Cited by 34
Abstract
In recent years, LIght Detection And Ranging (LiDAR) and especially Terrestrial Laser Scanning (TLS) systems have shown the potential to revolutionise forest structural characterisation by providing unprecedented 3D data. However, manned Airborne Laser Scanning (ALS) requires costly campaigns and produces relatively low point density, while TLS is labour-intense and time-demanding. Unmanned Aerial Vehicle (UAV)-borne laser scanning can bridge the gap between the two. In this study, we present first results and experiences with the RIEGL RiCOPTER with VUX®-1UAV ALS system and compare it with the well-tested RIEGL VZ-400 TLS system. We scanned the same forest plots with both systems over the course of two days. We derived Digital Terrain Models (DTMs), Digital Surface Models (DSMs) and finally Canopy Height Models (CHMs) from the resulting point clouds. ALS CHMs were on average 11.5 cm higher in five plots with different canopy conditions, which showed that TLS could not always detect the top of the canopy. Moreover, we extracted trunk segments of 58 trees for ALS and TLS simultaneously, of which 39 could be used to model Diameter at Breast Height (DBH). ALS DBH showed high agreement with TLS DBH, with a correlation coefficient of 0.98 and a root mean square error of 4.24 cm. We conclude that the RiCOPTER has the potential to perform comparably to TLS for estimating forest canopy height and DBH under the studied forest conditions. Further research should be directed to testing UAV-borne LiDAR for explicit 3D modelling of whole trees to estimate tree volume and subsequently Above-Ground Biomass (AGB). Full article
Open Access Article
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images
Sensors 2017, 17(10), 2352; https://doi.org/10.3390/s17102352 - 15 Oct 2017
Cited by 9
Abstract
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded by sun reflection effects, which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with a UAV, based on a photogrammetric strategy included in flight planning and control software developed by the authors. Two main consequences are derived from the approach: (i) image areas containing sun reflection problems can be excluded; and (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomic parameters computed (e.g., normalized difference vegetation index, NDVI) are improved, since radiometrically defective pixels are not considered. Finally, an accuracy assessment was performed to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that precise detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology. Full article
Open Access Article
Detection of Nuclear Sources by UAV Teleoperation Using a Visuo-Haptic Augmented Reality Interface
Sensors 2017, 17(10), 2234; https://doi.org/10.3390/s17102234 - 29 Sep 2017
Cited by 11
Abstract
A visuo-haptic augmented reality (VHAR) interface is presented enabling an operator to teleoperate an unmanned aerial vehicle (UAV) equipped with a custom CdZnTe-based spectroscopic gamma-ray detector in outdoor environments. The task is to localize nuclear radiation sources, whose location is unknown to the user, without the close exposure of the operator. The developed detector also enables identification of the localized nuclear sources. The aim of the VHAR interface is to increase the situation awareness of the operator. The user teleoperates the UAV using a 3DOF haptic device that provides an attractive force feedback around the location of the most intense detected radiation source. Moreover, a fixed camera on the ground observes the environment where the UAV is flying. A 3D augmented reality scene is displayed on a computer screen accessible to the operator. Multiple types of graphical overlays are shown, including sensor data acquired by the nuclear radiation detector, a virtual cursor that tracks the UAV and geographical information, such as buildings. Experiments performed in a real environment are reported using an intense nuclear source. Full article
Open Access Article
Adaptation of Dubins Paths for UAV Ground Obstacle Avoidance When Using a Low Cost On-Board GNSS Sensor
Sensors 2017, 17(10), 2223; https://doi.org/10.3390/s17102223 - 28 Sep 2017
Cited by 7
Abstract
Current research on Unmanned Aerial Vehicles (UAVs) shows a lot of interest in autonomous UAV navigation. This interest is mainly driven by the necessity to meet the rules and restrictions for small UAV flights that are issued by various international and national legal organizations. In order to lower these restrictions, new levels of automation and flight safety must be reached. In this paper, a new method for ground obstacle avoidance derived by using UAV navigation based on the Dubins paths algorithm is presented. The accuracy of the proposed method has been tested, and research results have been obtained by using Software-in-the-Loop (SITL) simulation and real UAV flights, with the measurements done with a low cost Global Navigation Satellite System (GNSS) sensor. All tests were carried out in a three-dimensional space, but the height accuracy was not assessed. The GNSS navigation data for the ground obstacle avoidance algorithm is evaluated statistically. Full article
Open Access Article
Towards a Transferable UAV-Based Framework for River Hydromorphological Characterization
Sensors 2017, 17(10), 2210; https://doi.org/10.3390/s17102210 - 26 Sep 2017
Cited by 6
Abstract
The multiple protocols that have been developed to characterize river hydromorphology, partly in response to legislative drivers such as the European Union Water Framework Directive (EU WFD), make the comparison of results obtained in different countries challenging. Recent studies have analyzed the comparability of existing methods, with remote sensing based approaches being proposed as a potential means of harmonizing hydromorphological characterization protocols. However, the resolution achieved by remote sensing products may not be sufficient to assess some of the key hydromorphological features that are required to allow an accurate characterization. Methodologies based on high resolution aerial photography taken from Unmanned Aerial Vehicles (UAVs) have been proposed by several authors as potential approaches to overcome these limitations. Here, we explore the applicability of an existing UAV based framework for hydromorphological characterization to three different fluvial settings representing some of the distinct ecoregions defined by the WFD geographical intercalibration groups (GIGs). The framework is based on the automated recognition of hydromorphological features via tested and validated Artificial Neural Networks (ANNs). Results show that the framework is transferable to the Central-Baltic and Mediterranean GIGs with accuracies in feature identification above 70%. Accuracies of 50% are achieved when the framework is implemented in the Very Large Rivers GIG. The framework successfully identified vegetation, deep water, shallow water, riffles, side bars and shadows for the majority of the reaches. However, further algorithm development is required to ensure a wider range of features (e.g., chutes, structures and erosion) are accurately identified. This study also highlights the need to develop an objective and fit for purpose hydromorphological characterization framework to be adopted within all EU member states to facilitate comparison of results. 
Full article
Open Access Article
Secure Utilization of Beacons and UAVs in Emergency Response Systems for Building Fire Hazard
Sensors 2017, 17(10), 2200; https://doi.org/10.3390/s17102200 - 25 Sep 2017
Cited by 3
Abstract
An intelligent emergency system for hazard monitoring and building evacuation is a very important application area in Internet of Things (IoT) technology. Through the use of smart sensors, such a system can provide more vital and reliable information to first-responders and also reduce the incidents of false alarms. Several smart monitoring and warning systems do already exist, though they exhibit key weaknesses such as a limited monitoring coverage and security, which have not yet been sufficiently addressed. In this paper, we propose a monitoring and emergency response method for buildings by utilizing beacons and Unmanned Aerial Vehicles (UAVs) on an IoT security platform. In order to demonstrate the practicability of our method, we also implement a proof of concept prototype, which we call the UAV-EMOR (UAV-assisted Emergency Monitoring and Response) system. Our UAV-EMOR system provides the following novel features: (1) secure communications between UAVs, smart sensors, the control server and a smartphone app for security managers; (2) enhanced coordination between smart sensors and indoor/outdoor UAVs to expand real-time monitoring coverage; and (3) beacon-aided rescue and building evacuation. Full article
Open Access Article
Towards the Automatic Detection of Pre-Existing Termite Mounds through UAS and Hyperspectral Imagery
Sensors 2017, 17(10), 2196; https://doi.org/10.3390/s17102196 - 24 Sep 2017
Cited by 4
Abstract
The increased technological development of Unmanned Aerial Vehicles (UAVs), combined with artificial intelligence and Machine Learning (ML) approaches, has opened the possibility of remote sensing of extensive areas of arid lands. In this paper, a novel approach to the detection of termite mounds using a UAV, hyperspectral imagery, ML and digital image processing is presented. A new pipeline is proposed to detect termite mounds automatically and, consequently, to reduce detection times. For the classification stage, the outcomes of several ML classification algorithms were studied, and support vector machines were selected as the best approach for image classification of pre-existing termite mounds. Various test conditions were applied to the proposed algorithm, obtaining an overall accuracy of 68%. Images with satisfactory mound detection proved that the method is “resolution-dependent”. Mounds were detected regardless of their rotation and position in the aerial image. However, image distortion reduced the number of detected mounds, due to the inclusion of a shape analysis method in the object detection phase, and image resolution remains decisive for obtaining accurate results. Hyperspectral imagery demonstrated better capabilities for classifying a large set of materials than applying traditional segmentation methods to RGB images only. Full article
Open Access Article
Uncooled Thermal Camera Calibration and Optimization of the Photogrammetry Process for UAV Applications in Agriculture
Sensors 2017, 17(10), 2173; https://doi.org/10.3390/s17102173 - 23 Sep 2017
Cited by 25
Abstract
The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to their low payload capacity, UAVs need to mount lightweight, uncooled thermal cameras in which the microbolometer is not stabilized to a constant temperature, which makes the camera's precision low for many applications. Additionally, the low contrast of thermal images makes the photogrammetry process inaccurate, resulting in large errors in the generation of orthoimages. In this research, we propose new calibration algorithms, based on neural networks, that take the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure-from-motion software. With the proposed calibration algorithm, the measurement accuracy improved from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increased the number of tie-points from 58,000 to 110,000 and decreased the total positioning error from 7.1 m to 1.3 m. Full article
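The calibration problem above maps two inputs (microbolometer digital number and sensor temperature) to a scene temperature. The paper's method is a neural network; the sketch below instead fits the simple linear baseline T = a·DN + b·T_sensor + c by least squares (normal equations solved with Cramer's rule), which is the kind of model such a network improves upon. All variable names are illustrative.

```python
def fit_linear_calibration(samples):
    """Fit T = a*dn + b*t_sensor + c by least squares.
    samples: iterable of (dn, t_sensor, t_true) tuples.
    Returns [a, b, c]. A baseline sketch, not the paper's neural network."""
    # accumulate the 3x3 normal-equation system A @ [a, b, c] = y
    A = [[0.0] * 3 for _ in range(3)]
    y = [0.0] * 3
    for dn, ts, t in samples:
        row = (dn, ts, 1.0)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            y[i] += row[i] * t

    def det3(m):
        # cofactor expansion along the first row
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coeffs = []
    for k in range(3):  # Cramer's rule: replace column k with y
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = y[i]
        coeffs.append(det3(Ak) / d)
    return coeffs
```

Replacing this linear map with a small neural network lets the calibration absorb the nonlinear drift of an unstabilized microbolometer, which is what the reported accuracy gain reflects.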
Open Access Article
Curvature Continuous and Bounded Path Planning for Fixed-Wing UAVs
Sensors 2017, 17(9), 2155; https://doi.org/10.3390/s17092155 - 19 Sep 2017
Cited by 5
Abstract
Unmanned Aerial Vehicles (UAVs) play an important role in applications such as data collection and target reconnaissance. An accurate and optimal path can effectively increase the mission success rate in the case of small UAVs. Although path planning for UAVs is similar to that for traditional mobile robots, the special kinematic characteristics of UAVs (such as their minimum turning radius) have not been taken into account in previous studies. In this paper, we propose a locally-adjustable, continuous-curvature, bounded path-planning algorithm for fixed-wing UAVs. To deal with the curvature discontinuity problem, an optimal interpolation algorithm and a key-point shift algorithm are proposed based on the derivation of a curvature continuity condition. To meet the upper bound for curvature and to render the curvature extrema controllable, a local replanning scheme is designed by combining arcs and Bezier curves with monotonic curvature. In particular, a path transition mechanism is built for the replanning phase using minimum curvature circles for a planning philosophy. Numerical results demonstrate that the analytical planning algorithm can effectively generate continuous-curvature paths, while satisfying the curvature upper bound constraint and allowing UAVs to pass through all predefined waypoints in the desired mission region. Full article
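The curvature bound discussed above can be checked numerically for any planned curve segment. The sketch below evaluates the standard plane-curve curvature formula κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2) for a quadratic Bezier and tests it against a minimum turning radius (κ ≤ 1/r_min). This is a generic illustration, not the paper's monotone-curvature construction.

```python
def bezier_curvature(p0, p1, p2, t):
    """Curvature of the quadratic Bezier with control points p0, p1, p2
    at parameter t, via kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^1.5."""
    # first derivative: 2(1-t)(p1 - p0) + 2t(p2 - p1)
    dx = 2 * (1 - t) * (p1[0] - p0[0]) + 2 * t * (p2[0] - p1[0])
    dy = 2 * (1 - t) * (p1[1] - p0[1]) + 2 * t * (p2[1] - p1[1])
    # second derivative: 2(p2 - 2*p1 + p0), constant in t
    ddx = 2 * (p2[0] - 2 * p1[0] + p0[0])
    ddy = 2 * (p2[1] - 2 * p1[1] + p0[1])
    return abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

def satisfies_turn_radius(p0, p1, p2, r_min, samples=100):
    """True if the sampled curvature never exceeds 1/r_min, i.e. the
    segment is flyable by a fixed-wing UAV with turning radius r_min."""
    return all(bezier_curvature(p0, p1, p2, i / samples) <= 1 / r_min
               for i in range(samples + 1))
```

A planner in the spirit of the abstract would replan any segment for which such a check fails, e.g. by shifting key points or splicing in arcs.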
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Spatial Scale Gap Filling Using an Unmanned Aerial System: A Statistical Downscaling Method for Applications in Precision Agriculture
Sensors 2017, 17(9), 2106; https://doi.org/10.3390/s17092106 - 14 Sep 2017
Cited by 4
Abstract
Applications of satellite-borne observations in precision agriculture (PA) are often limited due to the coarse spatial resolution of satellite imagery. This paper uses high-resolution airborne observations to increase the spatial resolution of satellite data for related applications in PA. A new variational downscaling scheme is presented that uses coincident aerial imagery products from “AggieAir”, an unmanned aerial system, to increase the spatial resolution of Landsat satellite data. This approach is primarily tested for downscaling individual-band Landsat images that can be used to derive normalized difference vegetation index (NDVI) and surface soil moisture (SSM). Quantitative and qualitative results demonstrate promising capabilities of the downscaling approach, enabling an effective increase of the spatial resolution of Landsat images by factors of 2 to 4. Specifically, the downscaling scheme retrieved the missing high-resolution features of the images and reduced the root mean squared error by 15, 11, and 10 percent in the visible, near-infrared, and thermal-infrared bands, respectively. This metric is reduced by 9% in the derived NDVI and remains negligible for the soil moisture products. Full article
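As a much simpler stand-in for the paper's variational scheme, the toy statistical downscaler below fits a linear relation between the coarse band and a high-resolution covariate aggregated to the coarse grid, then adds back the coarse residual so that re-aggregating the output reproduces the coarse image (the function name and linear model are illustrative assumptions):

```python
import numpy as np

def downscale_regression(coarse, fine_cov, factor):
    """Toy statistical downscaler (a sketch, not the paper's variational
    scheme).

    Fits a linear relation between the coarse-resolution band and a
    high-resolution covariate block-averaged to the coarse grid, applies
    it at fine resolution, and adds back the coarse residual so that
    block-averaging the output reproduces the coarse image exactly.
    """
    coarse = np.asarray(coarse, float)
    fine_cov = np.asarray(fine_cov, float)
    H, W = fine_cov.shape
    # block-average the fine covariate down to the coarse grid
    cov_c = fine_cov.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))
    a, b = np.polyfit(cov_c.ravel(), coarse.ravel(), 1)
    resid = coarse - (a * cov_c + b)
    # nearest-neighbour upsampling of the residual preserves coarse-scale means
    return a * fine_cov + b + np.kron(resid, np.ones((factor, factor)))
```

The coarse-scale consistency property (block means of the output equal the coarse input) is the same constraint a variational scheme enforces explicitly.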
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Sensors 2017, 17(9), 2060; https://doi.org/10.3390/s17092060 - 08 Sep 2017
Cited by 5
Abstract
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially in dense urban areas. Our solution is also direction-independent, giving it better adaptability and robustness for stitching images. Full article
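The dual-channel scheme itself is not reproduced here, but the core idea of finding a seam by energy accumulation can be sketched with a classic single-channel dynamic program over a per-pixel difference map (an illustrative simplification, not the authors' algorithm):

```python
import numpy as np

def min_cost_seam(cost):
    """Minimum-cost vertical seam through a per-pixel difference map via
    dynamic programming (down, down-left, down-right moves), returning
    one column index per row."""
    h, w = cost.shape
    acc = cost.astype(float).copy()
    for i in range(1, h):
        # cheapest predecessor among the three cells above
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    # backtrack from the cheapest bottom cell
    seam = [int(np.argmin(acc[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(acc[i, lo:hi])))
    return seam[::-1]
```

In a stitching pipeline, `cost` would be the absolute difference of the two registered images over their overlap, and the seam marks where to switch from one image to the other.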
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Accuracy Analysis of a Dam Model from Drone Surveys
Sensors 2017, 17(8), 1777; https://doi.org/10.3390/s17081777 - 03 Aug 2017
Cited by 10
Abstract
This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of the ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of the upstream face of a double-arch masonry dam were acquired from a drone survey and used to build a 3D model of the dam for vulnerability analysis purposes. However, the real impact of GCP placement on the georeferencing of the images, and thus of the model, remained to be understood. To this end, a high number of GCP configurations were investigated, building a series of dense point clouds. The accuracy of the resulting dense clouds was estimated by comparing the coordinates of check points extracted from the model with their true coordinates measured via traditional topography. The paper aims to provide information about the optimal choice of GCP placement, not only for dams but for all surveys of high-rise structures. A priori knowledge of the effect of GCP number and location on model accuracy can increase survey reliability and accuracy and speed up survey set-up operations. Full article
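The accuracy metric used for each GCP configuration, RMSE between model-derived check-point coordinates and their surveyed values, can be computed as follows (a generic sketch; the variable names are assumptions):

```python
import numpy as np

def checkpoint_rmse(model_xyz, survey_xyz):
    """Per-axis and total RMSE between check-point coordinates read off the
    3D model and the same points surveyed by traditional topography."""
    d = np.asarray(model_xyz, float) - np.asarray(survey_xyz, float)
    per_axis = np.sqrt((d ** 2).mean(axis=0))     # RMSE in X, Y, Z
    total = np.sqrt((d ** 2).sum(axis=1).mean())  # 3D point RMSE
    return per_axis, total
```

Evaluating this for every candidate GCP configuration and ranking the totals reproduces the comparison the paper performs.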
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Olive Actual “on Year” Yield Forecast Tool Based on the Tree Canopy Geometry Using UAS Imagery
Sensors 2017, 17(8), 1743; https://doi.org/10.3390/s17081743 - 30 Jul 2017
Cited by 4
Abstract
Olive growing is of notable importance in the countries of the Mediterranean basin, and its profitability depends on several factors, such as actual yield, production cost, and product price. Actual “on year” Yield (AY) is the production (kg tree−1) in “on years”, and this research attempts to relate it to geometrical parameters of the tree canopy. A regression equation to forecast AY based on manually measured canopy volume was determined from data acquired from different orchard categories and cultivars during different harvesting seasons in southern Spain. Orthoimages were acquired with unmanned aerial system (UAS) imagery, and individual crown areas were calculated and related to canopy volume and AY. Yield levels did not vary between orchard categories; however, they did differ between irrigated orchards (7000–17,000 kg ha−1) and rainfed ones (4000–7000 kg ha−1). Manual canopy volume was then related to the individual crown areas of the trees calculated from the UAS orthoimages. Finally, AY was forecast using both manual canopy volume and individual tree crown area as the main factors for olive productivity. Forecasting AY from individual crown area alone provides a simple and cheap forecasting tool for a wide range of olive orchards. The acquired information was introduced into a thematic map describing spatial AY variability obtained from orthoimage analysis, which may be a powerful tool for farmers, insurance systems, and market forecasts, or for detecting agronomic problems. Full article
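The forecasting step reduces to a least-squares regression of AY against crown area. A minimal sketch might look like the following (the paper's fitted coefficients are not reproduced here):

```python
import numpy as np

def fit_ay_model(crown_area_m2, observed_ay_kg):
    """Least-squares linear fit of actual 'on year' yield (kg per tree)
    against individual crown area (m^2); returns a forecast function.
    Coefficients from the paper are not reproduced here."""
    a, b = np.polyfit(np.asarray(crown_area_m2, float),
                      np.asarray(observed_ay_kg, float), 1)
    return lambda area: a * np.asarray(area, float) + b
```

Applying the returned function to the per-tree crown areas extracted from an orthoimage yields the values for the thematic AY-variability map.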
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Seamline Determination Based on PKGC Segmentation for Remote Sensing Image Mosaicking
Sensors 2017, 17(8), 1721; https://doi.org/10.3390/s17081721 - 27 Jul 2017
Cited by 2
Abstract
This paper presents a novel method of seamline determination for remote sensing image mosaicking. A two-level optimization strategy is applied to determine the seamline. Object-level optimization is executed first: background regions (BRs) and obvious regions (ORs) are extracted based on the results of parametric kernel graph cuts (PKGC) segmentation. A global cost map consisting of color difference, a multi-scale morphological gradient (MSMG) constraint, and texture difference is weighted by the BRs. The seamline is then determined in the weighted cost map from the start point to the end point, with Dijkstra's shortest path algorithm adopted for pixel-level optimization of the seamline position. Meanwhile, a new seamline optimization strategy is proposed for image mosaicking with multi-image overlapping regions. The experimental results show better performance than the conventional method based on mean-shift segmentation: seamlines from the proposed method bypass obvious objects and take less time to compute. This new method is efficient and superior for seamline determination in remote sensing image mosaicking. Full article
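The pixel-level step runs Dijkstra's shortest path over the weighted cost map. A minimal 4-connected grid version (an illustrative sketch, not the authors' implementation) is:

```python
import heapq
import numpy as np

def dijkstra_seam(cost, start, end):
    """Dijkstra's shortest path over a weighted cost map on a 4-connected
    grid; start and end are (row, col) tuples, and the returned path is a
    list of (row, col) cells."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue  # stale queue entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + cost[ni, nj]
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

In the paper's setting, low-cost cells correspond to background regions, so the recovered path naturally skirts around obvious objects.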
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles
Sensors 2017, 17(7), 1646; https://doi.org/10.3390/s17071646 - 18 Jul 2017
Cited by 3
Abstract
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature-point image registration technique. The algorithm is implemented on the lightweight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometric transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector; homologous points in the other images are then obtained by template matching, using an initial position that benefits greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up parts of the algorithm in order to achieve real-time performance, as our ultimate objective is to write only the resulting image, to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing times, resulting images, and block diagrams of the described architecture. The stacked images obtained from real surveys show no visible impairment. An interesting by-product of this algorithm is the 3D rotation between poses estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time. Timing results demonstrate that image resampling is the most demanding processing task and should also be accelerated in the FPGA in future work. Full article
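The full pipeline (FAST detection, IMU-seeded template matching, FPGA resampling) is hardware-specific, but the stacking idea can be sketched with a translation-only alignment based on FFT cross-correlation; this is a simplifying assumption, as the real method estimates a full geometric transformation:

```python
import numpy as np

def stack_with_shift(images):
    """Translation-only stacking sketch: estimate the integer shift of each
    frame relative to the first by FFT cross-correlation, realign it, and
    average. (The camera's real pipeline registers frames with FAST feature
    points seeded by the IMU and a full geometric resampling.)"""
    ref = images[0].astype(float)
    F_ref = np.fft.fft2(ref)
    acc = ref.copy()
    for img in images[1:]:
        img = img.astype(float)
        # cross-correlation peak gives the realigning circular shift
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(img)))
        dy, dx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(images)
```

Averaging N aligned short exposures raises the signal-to-noise ratio roughly as sqrt(N), which is the point of stacking instead of one long exposure.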
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Vicarious Calibration of sUAS Microbolometer Temperature Imagery for Estimation of Radiometric Land Surface Temperature
Sensors 2017, 17(7), 1499; https://doi.org/10.3390/s17071499 - 26 Jun 2017
Cited by 12
Abstract
In recent years, the availability of lightweight microbolometer thermal cameras compatible with small unmanned aerial systems (sUAS) has allowed their use in diverse scientific and management activities that require sub-meter pixel resolution. Nevertheless, as with sensors already used in temperature remote sensing (e.g., the Landsat satellites), a radiance atmospheric correction is necessary to estimate land surface temperature, because atmospheric conditions at any sUAS flight elevation adversely affect image accuracy, derived calculations, and study replicability with microbolometer technology. This study presents a vicarious calibration methodology (sUAS-specific, time-specific, flight-specific, and sensor-specific) for sUAS temperature imagery, traceable to NIST standards and current atmospheric correction methods. For this methodology, a three-year data collection campaign with an sUAS called “AggieAir”, developed at Utah State University, was performed over vineyards near Lodi, California, with flights conducted at different times (early morning, Landsat overpass, and mid-afternoon) and under different seasonal conditions. The results show that, despite the broad spectral response of microbolometer cameras (7.0 to 14.0 μm), it was possible to account for the effects of atmospheric and sUAS operational conditions, regardless of time and weather, and acquire accurate surface temperature data. In addition, the main atmospheric correction parameters (transmissivity and atmospheric radiance) were found to vary significantly over the course of a day: they fluctuated most in the early morning and partially stabilized by the Landsat overpass and mid-afternoon times. In terms of accuracy, the estimated atmospheric correction parameters presented adequate statistics (confidence bounds under ±0.1 for transmissivity and ±1.2 W/m2/sr/μm for atmospheric radiance, with RMSE below 1.0 W/m2/sr/μm) for all sUAS flights. Differences in estimated temperature between the original thermal imagery and the vicarious calibration procedure reported here ranged from −5 °C to 10 °C for early morning, and from 0 °C to 20 °C for the Landsat overpass and mid-afternoon times. Full article
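The atmospheric correction underlying such work is the standard single-channel radiative transfer relation; inverting it for surface-leaving radiance can be sketched as follows (the emissivity default and variable names are illustrative assumptions, not values from the paper):

```python
def surface_radiance(l_sensor, tau, l_up, l_down, emissivity=0.98):
    """Invert the standard single-channel thermal correction
        L_sensor = tau * (eps * L_surf + (1 - eps) * L_down) + L_up
    to recover the surface-leaving radiance L_surf (W/m^2/sr/um).
    tau is the atmospheric transmissivity; l_up and l_down are the
    upwelling and downwelling atmospheric radiances."""
    return (l_sensor - l_up - tau * (1.0 - emissivity) * l_down) / (tau * emissivity)
```

Converting the recovered radiance to temperature then requires the camera's band-specific Planck (or lookup-table) inverse, which is sensor-dependent and omitted here.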
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach
Sensors 2017, 17(6), 1437; https://doi.org/10.3390/s17061437 - 19 Jun 2017
Cited by 11
Abstract
One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. To address this, an on-ground deployed visual approach is developed in this paper. This approach is particularly suitable for landing in global navigation satellite system (GNSS)-denied environments. In application, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. A separate long-baseline stereo architecture is proposed, providing an extendable baseline and a wide-angle field of view (FOV) in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. Full article
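The benefit of an extendable baseline follows from the classic stereo relation Z = f·B/d and its first-order error propagation; a small sketch (the 0.5 px matching error is an assumed illustrative value, not the paper's):

```python
def stereo_depth(focal_px, baseline_m, disparity_px, disparity_err_px=0.5):
    """Depth from the classic stereo relation Z = f*B/d, together with the
    first-order depth uncertainty dZ = Z^2 / (f*B) * dd. A longer baseline
    B shrinks dZ at a given range, which is the motivation for a separate
    long-baseline rig over a fixed-baseline stereo head."""
    z = focal_px * baseline_m / disparity_px
    dz = z ** 2 / (focal_px * baseline_m) * disparity_err_px
    return z, dz
```

For example, doubling the baseline at the same range doubles the disparity and halves the depth uncertainty.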
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available

Open Access Article
The Development of an Open Hardware and Software System Onboard Unmanned Aerial Vehicles to Monitor Concentrated Solar Power Plants
Sensors 2017, 17(6), 1329; https://doi.org/10.3390/s17061329 - 08 Jun 2017
Cited by 2
Abstract
Concentrated solar power (CSP) plants are increasingly gaining interest as a source of renewable energy. These plants face several technical problems, and the inspection of components such as absorber tubes in parabolic trough concentrators (PTC), which are widely deployed, is necessary to guarantee plant efficiency. This article presents a system for real-time industrial inspection of CSP plants using low-cost, open-source components in conjunction with a thermographic sensor and an unmanned aerial vehicle (UAV). The system, available as open-source hardware and software, is designed to be employed independently of the type of device used for inspection (laptop, smartphone, tablet or smartglasses) and its operating system. Several UAV flight missions were programmed with flight altitudes of 20, 40, 60, 80, 100 and 120 m above ground level and three cruising speeds of 5, 7 and 10 m/s; these settings were chosen and analyzed in order to optimize inspection time. The results indicate that it is possible to perform real-time UAV inspections of CSP plants to detect anomalous absorber tubes and improve the effectiveness of the methodologies currently in use. Moreover, beyond thermographic sensors, this contribution can be applied to other sensors and used in a broad range of applications where real-time georeferenced data visualization is necessary. Full article
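The choice of flight altitude trades coverage against resolvable detail via the ground sample distance GSD = pitch·H/f. The sensor parameters below (17 µm pixel pitch, 19 mm lens) are illustrative assumptions, not the paper's hardware:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground sample distance GSD = pitch * H / f: the ground footprint of
    one pixel, which governs whether an absorber tube is resolvable at a
    given flight altitude."""
    return pixel_pitch_m * altitude_m / focal_length_m

# GSD (m/px) at each programmed altitude, for assumed sensor parameters
for h in (20, 40, 60, 80, 100, 120):
    print(h, round(ground_sample_distance(17e-6, 0.019, h), 4))
```

A mission planner would pick the highest altitude whose GSD still covers the narrowest feature of interest with a few pixels.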
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications) Printed Edition available
