Open Access Article

Vision-Based Multirotor Following Using Synthetic Learning Techniques

1 Computer Vision and Aerial Robotics Group, Centre for Automation and Robotics, Universidad Politécnica de Madrid (UPM-CSIC), Calle Jose Gutierrez Abascal 2, 28006 Madrid, Spain
2 Artificial Intelligence Group, University of Groningen, 9712 Groningen, The Netherlands
3 Aerospace Controls Laboratory, Massachusetts Institute of Technology (MIT), 77 Massachusetts Ave., Cambridge, MA 02139, USA
* Author to whom correspondence should be addressed.
Sensors 2019, 19(21), 4794; https://doi.org/10.3390/s19214794
Received: 9 August 2019 / Revised: 17 October 2019 / Accepted: 30 October 2019 / Published: 4 November 2019
(This article belongs to the Special Issue Mobile Robot Navigation)
Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in image-recognition, object-detection, and motion-control tasks. However, the research community still lacks robust approaches for compensating for the unavailability of extensive real-world data through realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies are used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver is learned from synthetic images and high-dimensional, low-level continuous robot states, using deep-learning techniques for object detection and reinforcement-learning techniques for motion control. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during the following maneuver. The results confirm that the proposed framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing adequate results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights).
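
The coupling between the camera gimbal and the multirotor motion can be pictured with a minimal control-loop sketch. The code below is an illustrative assumption rather than the paper's implementation: the detector output (a bounding box of the target multirotor) drives both the gimbal rates and the body commands, standing in for the learned deep-detection and reinforcement-learning components described in the abstract; all class names, gains, and signatures are hypothetical.

    # Illustrative sketch only (hypothetical names and gains, not the paper's code):
    # a following loop in which gimbal and multirotor commands are derived jointly
    # from the detected target position, standing in for the learned detector and
    # reinforcement-learning policy described in the abstract.
    import numpy as np

    class CoupledFollowingController:
        def __init__(self, image_size=(640, 480), k_gimbal=0.5, k_vel=0.8):
            self.width, self.height = image_size
            self.k_gimbal = k_gimbal  # gain from normalized pixel error to gimbal rate (assumed)
            self.k_vel = k_vel        # gain from apparent-size error to forward speed (assumed)

        def step(self, bbox, target_width_ratio=0.2):
            """bbox = (x, y, w, h) of the detected multirotor in image coordinates."""
            x, y, w, h = bbox
            # Normalized error of the target centre with respect to the image centre.
            ex = (x + w / 2 - self.width / 2) / (self.width / 2)
            ey = (y + h / 2 - self.height / 2) / (self.height / 2)
            # Gimbal rates keep the target centred in the image.
            gimbal_yaw_rate = -self.k_gimbal * ex
            gimbal_pitch_rate = -self.k_gimbal * ey
            # Coupling: the body yaw tracks the gimbal yaw, while the forward speed
            # regulates the target's apparent size (a proxy for relative distance).
            body_yaw_rate = gimbal_yaw_rate
            forward_speed = float(np.clip(self.k_vel * (target_width_ratio - w / self.width), -1.0, 1.0))
            return gimbal_yaw_rate, gimbal_pitch_rate, body_yaw_rate, forward_speed

In the paper's framework, the detector and the controller are learned from synthetic data; this sketch only makes the gimbal/body coupling concrete.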
Keywords: multirotor; UAV; following; synthetic learning; reinforcement learning; deep learning
MDPI and ACS Style

Rodriguez-Ramos, A.; Alvarez-Fernandez, A.; Bavle, H.; Campoy, P.; How, J.P. Vision-Based Multirotor Following Using Synthetic Learning Techniques. Sensors 2019, 19, 4794.

