Deep Learning for Drones and Its Applications

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: closed (30 April 2020) | Viewed by 19216

Special Issue Editors

Dr. Inkyu Sa
Guest Editor
Queensland Centre for Advanced Technologies (QCAT), Pullenvale, QLD 4069, Australia
Interests: UAV; robot vision; state estimation; deep learning in agriculture (horticulture); reinforcement learning

Dr. Alexander Millane
Guest Editor
Department of Mechanical and Process Engineering, Institute of Robotics and Intelligent Systems, Autonomous Systems Lab, ETH Zurich, J304, Building LEE, Leonhardstrasse 21, 8092 Zurich, Switzerland
Interests: robot vision; SLAM; unmanned aerial vehicles

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit your papers to the MDPI Drones Special Issue on Deep Learning for Drones and Its Applications.

Drones, especially vertical takeoff and landing (VTOL) platforms, are extremely popular and useful for many tasks. The variety of commercially available VTOL platforms today indicates that they have left the research lab and are being used for real-world aerial work, such as vertical structure inspection, construction site surveys, and precision agriculture. These platforms offer high-level autonomous functionality, minimizing user intervention, and can carry the payloads an application requires.

In addition, we have witnessed rapid growth in machine learning, especially deep learning. State-of-the-art deep learning techniques have been shown to match or even outperform human capabilities in many sophisticated tasks, such as autonomous driving, playing games such as Go or Dota 2 (reinforcement learning), and medical image analysis (object detection and instance segmentation).

Building on these two cutting-edge technologies, there is growing interest in applying deep learning techniques to aerial robots in order to improve their capabilities and level of autonomy. This step change will play a pivotal role in both drone technology and the field of aerial robotics.

In this context, we invite papers for this Special Issue focusing on current advances in deep learning for field aerial robots.

Papers are solicited on all areas directly related to these topics, including but not limited to the following:

  • Large-scale aerial datasets and standardized benchmarks for the training, testing, and evaluation of deep-learning solutions
  • Deep neural networks (DNNs) for field aerial robot perception (e.g., object detection or semantic classification for navigation)
  • Recurrent networks for state estimation and dynamic identification of aerial vehicles
  • Deep reinforcement learning for aerial robots (discrete or continuous control) in dynamic environments
  • Learning-based aerial manipulation in cluttered environments
  • Decision making or task planning using machine learning for field aerial robots
  • Data analytics and real-time decision making with aerial robots-in-the-loop
  • Aerial robots in agriculture using deep learning
  • Aerial robots in inspection using deep learning
  • Imitation learning for aerial robots (e.g., teach and repeat)
  • Multi-agent aerial coordination using deep learning

Dr. Inkyu Sa
Dr. Alexander Millane
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research


20 pages, 8360 KiB  
Article
Deep Learning-Based Damage Detection from Aerial SfM Point Clouds
by Mohammad Ebrahim Mohammadi, Daniel P. Watson and Richard L. Wood
Drones 2019, 3(3), 68; https://doi.org/10.3390/drones3030068 - 27 Aug 2019
Cited by 18 | Viewed by 6465
Abstract
Aerial data collection is well known as an efficient method to study the impact following extreme events. While datasets predominately include images for post-disaster remote sensing analyses, images alone cannot provide detailed geometric information due to a lack of depth or the complexity required to extract geometric details. However, geometric and color information can easily be mined from three-dimensional (3D) point clouds. Scene classification is commonly studied within the field of machine learning, where a workflow follows a pipeline operation to compute a series of engineered features for each point and then points are classified based on these features using a learning algorithm. However, these workflows cannot be directly applied to an aerial 3D point cloud due to a large number of points, density variation, and object appearance. In this study, the point cloud datasets are transferred into a volumetric grid model to be used in the training and testing of 3D fully convolutional network models. The goal of these models is to semantically segment two areas that sustained damage after Hurricane Harvey, which occurred in 2017, into six classes, including damaged structures, undamaged structures, debris, roadways, terrain, and vehicles. These classes are selected to understand the distribution and intensity of the damage. The point clouds consist of two distinct areas assembled using aerial Structure-from-Motion from a camera mounted on an unmanned aerial system. The two datasets contain approximately 5000 and 8000 unique instances, and the developed methods are assessed quantitatively using precision, accuracy, recall, and intersection over union metrics.
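As a rough illustration of the pipeline this abstract describes (point cloud → volumetric grid → per-voxel class labels scored with intersection over union), the sketch below voxelizes a point cloud into a binary occupancy grid and computes per-class IoU. The voxel size, six-class labeling, and random data are placeholder assumptions for illustration, not the authors' actual configuration.

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Map an N x 3 point cloud into a binary occupancy grid.

    A simplified stand-in for the volumetric-grid preprocessing step the
    abstract mentions; the paper's grid resolution and feature encoding
    may differ.
    """
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

def iou_per_class(pred, target, num_classes):
    """Intersection over union for each semantic class."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

# Example: 10,000 random points in a 20 m cube, six hypothetical classes
# (damaged structures, undamaged structures, debris, roadways, terrain, vehicles).
points = np.random.rand(10_000, 3) * 20.0
grid = voxelize(points, voxel_size=0.5)        # -> roughly a (40, 40, 40) grid
pred = np.random.randint(0, 6, size=10_000)    # placeholder network output
target = np.random.randint(0, 6, size=10_000)  # placeholder ground truth
print(grid.shape, iou_per_class(pred, target, num_classes=6))
```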

Other


14 pages, 2002 KiB  
Letter
Deep Neural Networks and Transfer Learning for Food Crop Identification in UAV Images
by Robert Chew, Jay Rineer, Robert Beach, Maggie O’Neil, Noel Ujeneza, Daniel Lapidus, Thomas Miano, Meghan Hegarty-Craver, Jason Polly and Dorota S. Temple
Drones 2020, 4(1), 7; https://doi.org/10.3390/drones4010007 - 26 Feb 2020
Cited by 75 | Viewed by 9599
Abstract
Accurate projections of seasonal agricultural output are essential for improving food security. However, the collection of agricultural information through seasonal agricultural surveys is often not timely enough to inform public and private stakeholders about crop status during the growing season. Acquiring timely and accurate crop estimates can be particularly challenging in countries with predominately smallholder farms because of the large number of small plots, intense intercropping, and high diversity of crop types. In this study, we used RGB images collected from unmanned aerial vehicles (UAVs) flown in Rwanda to develop a deep learning algorithm for identifying crop types, specifically bananas, maize, and legumes, which are key strategic food crops in Rwandan agriculture. The model leverages advances in deep convolutional neural networks and transfer learning, employing the VGG16 architecture and the publicly accessible ImageNet dataset for pretraining. The developed model performs with an overall test set F1 of 0.86, with individual classes ranging from 0.49 (legumes) to 0.96 (bananas). Our findings suggest that although certain staple crops such as bananas and maize can be classified at this scale with high accuracy, crops involved in intercropping (legumes) can be difficult to identify consistently. We discuss the potential use cases for the developed model and recommend directions for future research in this area.
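The transfer-learning recipe the abstract outlines, a VGG16 convolutional base pretrained on ImageNet with a new classifier head for the three crop classes, can be sketched in a few lines of Keras. The input resolution, head layers, and optimizer settings below are illustrative assumptions, not the authors' exact setup.

```python
import tensorflow as tf

NUM_CLASSES = 3  # bananas, maize, legumes (per the abstract)

# VGG16 convolutional base pretrained on ImageNet, original classifier removed.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze the pretrained features for initial training

# New classification head for the UAV crop-identification task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

A typical refinement, once the new head converges, is to unfreeze the top convolutional block and fine-tune at a lower learning rate; whether the authors did so is not stated in the abstract.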
