Special Issue "UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 June 2020).

Special Issue Editors

Dr. Felipe Gonzalez Toro
Guest Editor
School of Electrical Engineering and Computer Science, Australian Research Center for Aerospace Automation (ARCAA), Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
Interests: UAVs; gas sensors; imaging sensors; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection; sense and avoid
Prof. Dr. Antonios Tsourdos
Guest Editor
Cranfield University, School of Aerospace, Transport and Manufacturing, College Road, Cranfield MK43 0AL, UK
Interests: instrumentation; sensors and measurement science; autonomous systems; system engineering; vehicle health management; mechatronics and advanced controls

Special Issue Information

Dear Colleagues,

The design of novel UAV systems, the integration of UAV platforms with robotic sensing and imaging techniques, the development of processing workflows, and the capacity to deliver data of ultra-high temporal and spatial resolution have enabled a rapid uptake of UAVs and drones across several industries and application domains.

This Special Issue provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and of associated advances in sensor technology, data processing and communications, and UAV system design and sensing capabilities, in GPS-enabled (and, more broadly, Global Navigation Satellite System (GNSS)-enabled) and GPS/GNSS-denied environments.

Prospective authors are invited to contribute to this Special Issue of Sensors (Impact Factor: 2.475 (2017); 5-Year Impact Factor: 3.014 (2017)) by submitting an original manuscript.

Contributions may focus on, but are not limited to:

  • UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging;
  • UAV sensor applications: spatial ecology, pest detection, reef and forestry monitoring, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, monitoring of atmospheric, chemical, biological, and natural disaster phenomena, fire and flood prevention, pollution monitoring, microclimates, and land use;
  • Wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques;
  • UAV-based change detection;
  • Sense and avoid;
  • UAV navigation in cluttered environments;
  • Single and multiple UAVs or swarms in GPS/GNSS-denied environments;
  • On-board decision making;
  • Real-time georeferencing for UAV-based imaging;
  • Radiometric and spectral calibration of UAV-based sensors;
  • UAV onboard data storage, transmission, and retrieval;
  • Collaborative strategies and architectures to control multiple UAVs and sensor networks for the purpose of remote sensing.

Dr. Felipe Gonzalez Toro
Prof. Dr. Antonios Tsourdos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)


Research

Open Access Article
Scalable Distributed State Estimation in UTM Context
Sensors 2020, 20(9), 2682; https://doi.org/10.3390/s20092682 - 08 May 2020
Abstract
This article proposes a novel approach to the Distributed State Estimation (DSE) problem for a set of co-operating UAVs equipped with heterogeneous on board sensors capable of exploiting certain characteristics typical of the UAS Traffic Management (UTM) context, such as high traffic density and the presence of limited range, Vehicle-to-Vehicle communication devices. The proposed algorithm is based on a scalable decentralized Kalman Filter derived from the Internodal Transformation Theory enhanced on the basis of the Consensus Theory. The general benefit of the proposed algorithm consists of, on the one hand, reducing the estimation problem to smaller local sub-problems, through a self-organization process of the local estimating nodes in response to the time varying communication topology; and on the other hand, of exploiting measures carried out nearby in order to improve the accuracy of the local estimates. In the UTM context, this enables each vehicle to estimate both its own position and velocity, as well as those of the neighboring vehicles, using both on board measurements and information transmitted by neighboring vehicles. A numerical simulation in a simplified UTM scenario is presented, in order to illustrate the salient aspects of the proposed algorithm. Full article
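The consensus idea at the core of the algorithm can be illustrated with a minimal sketch (not the paper's internodal-transformation filter, and with an illustrative gain): each node repeatedly nudges its estimate toward its neighbours' values over the current communication topology, and the network converges to a common value.

```python
import numpy as np

def consensus_step(estimates, adjacency, epsilon=0.2):
    """One synchronous consensus iteration: every node moves its local
    estimate toward those of its neighbours, weighted by epsilon."""
    estimates = np.asarray(estimates, dtype=float)
    updated = estimates.copy()
    for i, row in enumerate(adjacency):
        for j, linked in enumerate(row):
            if linked and j != i:
                updated[i] += epsilon * (estimates[j] - estimates[i])
    return updated

# Three UAVs in a line topology (0-1, 1-2) start with disagreeing
# estimates of a target position; iteration drives them toward the
# network average, 14.0, with only neighbour-to-neighbour exchanges.
adjacency = [[0, 1, 0],
             [1, 0, 1],
             [0, 1, 0]]
x = [10.0, 12.0, 20.0]
for _ in range(50):
    x = consensus_step(x, adjacency)
```

With a symmetric topology and a small enough gain the node-wise sum is preserved at every step, which is why the common limit is the network average even though no node ever sees all the measurements.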

Open Access Article
Evaluation of the Georeferencing Accuracy of a Photogrammetric Model Using a Quadrocopter with Onboard GNSS RTK
Sensors 2020, 20(8), 2318; https://doi.org/10.3390/s20082318 - 18 Apr 2020
Abstract
Using a GNSS RTK (Global Navigation Satellite System Real Time Kinematic) -equipped unmanned aerial vehicle (UAV) could greatly simplify the construction of highly accurate digital models through SfM (Structure from Motion) photogrammetry, possibly even avoiding the need for ground control points (GCPs). As previous studies on this topic were mostly performed using fixed-wing UAVs, this study aimed to investigate the results achievable by a quadrocopter (DJI Phantom 4 RTK). Three image acquisition flights were performed for two sites of a different character (urban and rural) along with three calculation variants for each flight: georeferencing using ground-surveyed GCPs only, onboard GNSS RTK only, and a combination thereof. The combined and GNSS RTK methods provided the best results (at the expected level of accuracy of 1–2 GSD (Ground Sample Distance)) for both the vertical and horizontal components. The horizontal positioning was also accurate when georeferencing directly based on the onboard GNSS RTK; the vertical component, however, can be (especially where the terrain is difficult for SfM evaluation) burdened with relatively high systematic errors. This problem was caused by the incorrect identification of the interior orientation parameters calculated, as is customary for non-metric cameras, together with bundle adjustment. This problem could be resolved by using a small number of GCPs (at least one) or quality camera pre-calibration. Full article
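Accuracy figures like the 1-2 GSD level quoted above come from comparing the model against independently surveyed check points. A minimal sketch, with made-up coordinates, of how horizontal and vertical RMSE are computed and expressed in GSD units:

```python
import math

def rmse_components(surveyed, model, gsd_m):
    """Horizontal and vertical check-point RMSE of a photogrammetric
    model, also expressed as multiples of the ground sample distance.
    Points are (E, N, H) tuples in metres; all values illustrative."""
    n = len(surveyed)
    h2 = sum((s[0] - m[0]) ** 2 + (s[1] - m[1]) ** 2
             for s, m in zip(surveyed, model)) / n
    v2 = sum((s[2] - m[2]) ** 2 for s, m in zip(surveyed, model)) / n
    rmse_h, rmse_v = math.sqrt(h2), math.sqrt(v2)
    return rmse_h, rmse_v, rmse_h / gsd_m, rmse_v / gsd_m

# Two hypothetical check points with a few centimetres of error each,
# at a 2.5 cm ground sample distance.
surveyed = [(0.00, 0.00, 10.00), (5.00, 5.00, 12.00)]
model    = [(0.03, 0.04, 10.05), (5.03, 4.96, 11.95)]
rh, rv, rh_gsd, rv_gsd = rmse_components(surveyed, model, gsd_m=0.025)
```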

Open Access Article
Honeycomb Map: A Bioinspired Topological Map for Indoor Search and Rescue Unmanned Aerial Vehicles
Sensors 2020, 20(3), 907; https://doi.org/10.3390/s20030907 - 08 Feb 2020
Cited by 1
Abstract
The use of robots to map disaster-stricken environments can prevent rescuers from being harmed when exploring an unknown space. In addition, mapping a multi-robot environment can help these teams plan their actions with prior knowledge. The present work proposes the use of multiple unmanned aerial vehicles (UAVs) in the construction of a topological map inspired by the way that bees build their hives. A UAV can map a honeycomb only if it is adjacent to a known one. Different metrics to choose the honeycomb to be explored were applied. At the same time, as UAVs scan honeycomb adjacencies, RGB-D and thermal sensors capture other data types, and then generate a 3D view of the space and images of spaces where there may be fire spots, respectively. Simulations in different environments showed that the choice of metric and variation in the number of UAVs influence the number of performed displacements in the environment, consequently affecting exploration time and energy use. Full article
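The adjacency rule and metric-driven cell selection can be sketched on an axial-coordinate hexagonal grid; the distance-to-UAV metric below is just one of the possible metrics the paper compares, and all names are illustrative:

```python
def hex_distance(a, b):
    """Grid distance between two honeycomb cells in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return max(abs(dq), abs(dr), abs(dq + dr))

# The six neighbours of an axial-coordinate hex cell.
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def frontier_cells(known):
    """Unmapped cells adjacent to a mapped one; per the paper's rule,
    a UAV may only map a honeycomb adjacent to an already-known one."""
    frontier = set()
    for q, r in known:
        for dq, dr in HEX_NEIGHBORS:
            cell = (q + dq, r + dr)
            if cell not in known:
                frontier.add(cell)
    return frontier

def next_cell(known, uav_pos):
    """One possible selection metric (the paper compares several):
    take the frontier cell closest to the UAV."""
    return min(frontier_cells(known), key=lambda c: hex_distance(c, uav_pos))

known = {(0, 0)}                    # the cell the swarm starts from
for _ in range(3):                  # greedily map three more cells
    known.add(next_cell(known, (0, 0)))
```

Because the frontier only ever contains cells adjacent to mapped ones, the greedy loop always produces a connected map, mirroring how a hive grows outward from existing comb.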

Open Access Article
UAV Autonomous Localization Using Macro-Features Matching with a CAD Model
Sensors 2020, 20(3), 743; https://doi.org/10.3390/s20030743 - 29 Jan 2020
Abstract
Research in the field of autonomous Unmanned Aerial Vehicles (UAVs) has significantly advanced in recent years, mainly due to their relevance in a large variety of commercial, industrial, and military applications. However, UAV navigation in GPS-denied environments continues to be a challenging problem that has been tackled in recent research through sensor-based approaches. This paper presents a novel offline, portable, real-time in-door UAV localization technique that relies on macro-feature detection and matching. The proposed system leverages the support of machine learning, traditional computer vision techniques, and pre-existing knowledge of the environment. The main contribution of this work is the real-time creation of a macro-feature description vector from the UAV captured images which are simultaneously matched with an offline pre-existing vector from a Computer-Aided Design (CAD) model. This results in a quick UAV localization within the CAD model. The effectiveness and accuracy of the proposed system were evaluated through simulations and experimental prototype implementation. Final results reveal the algorithm’s low computational burden as well as its ease of deployment in GPS-denied environments. Full article

Open Access Article
A Framework for Multiple Ground Target Finding and Inspection Using a Multirotor UAS
Sensors 2020, 20(1), 272; https://doi.org/10.3390/s20010272 - 03 Jan 2020
Abstract
Small unmanned aerial systems (UASs) now have advanced waypoint-based navigation capabilities, which enable them to collect surveillance, wildlife ecology and air quality data in new ways. The ability to remotely sense and find a set of targets and descend and hover close to each target for an action is desirable in many applications, including inspection, search and rescue and spot spraying in agriculture. This paper proposes a robust framework for vision-based ground target finding and action using the high-level decision-making approach of Observe, Orient, Decide and Act (OODA). The proposed framework was implemented as a modular software system using the robotic operating system (ROS). The framework can be effectively deployed in different applications where single or multiple target detection and action is needed. The accuracy and precision of camera-based target position estimation from a low-cost UAS is not adequate for the task due to errors and uncertainties in low-cost sensors, sensor drift and target detection errors. External disturbances such as wind also pose further challenges. The implemented framework was tested using two different test cases. Overall, the results show that the proposed framework is robust to localization and target detection errors and able to perform the task. Full article

Open Access Article
Assessment of DSMs Using Backpack-Mounted Systems and Drone Techniques to Characterise Ancient Underground Cellars in the Duero Basin (Spain)
Sensors 2019, 19(24), 5352; https://doi.org/10.3390/s19245352 - 04 Dec 2019
Abstract
In this study, a backpack-mounted 3D mobile scanning system and a fixed-wing drone (UAV) have been used to register terrain data on the same space. The study area is part of the ancient underground cellars in the Duero Basin. The aim of this work is to characterise the state of the roofs of these wine cellars by obtaining digital surface models (DSM) using the previously mentioned systems to detect any possible cases of collapse, using four geomatic products obtained with these systems. The results obtained from the process offer sufficient quality to generate valid DSMs in the study area or in a similar area. One limitation of the DSMs generated by backpack MMS is that the outcome depends on the distance of the points to the axis of the track and on the irregularities in the terrain. Specific parameters have been studied, such as the measuring distance from the scanning point in the laser scanner, the angle of incidence with regard to the ground, the surface vegetation, and any irregularities in the terrain. The registration speed and the high definition of the terrain offered by these systems produce a model that can be used to select the correct conservation priorities for this unique space. Full article

Open Access Article
Multi-Variant Accuracy Evaluation of UAV Imaging Surveys: A Case Study on Investment Area
Sensors 2019, 19(23), 5229; https://doi.org/10.3390/s19235229 - 28 Nov 2019
Abstract
The main focus of the presented study is a multi-variant accuracy assessment of a photogrammetric 2D and 3D data collection, whose accuracy meets the appropriate technical requirements, based on a block of 858 digital images (4.6 cm ground sample distance) acquired by a Trimble® UX5 unmanned aircraft system equipped with a Sony NEX-5T compact system camera. All 1418 well-defined ground control and check points were measured a posteriori by Global Navigation Satellite Systems (GNSS) using the real-time network method. High accuracy of the photogrammetric products was obtained by computations performed according to the proposed methodology, which assumes multi-variant image processing and extended error analysis. Blurred images were detected during preprocessing by applying the Laplacian operator and the Fourier transform, implemented in Python using the Open Source Computer Vision library. Data processing was performed in the Pix4Dmapper suite supported by additional software: in the bundle block adjustment (results verified using the RealityCapture and PhotoScan applications), for the digital surface model (CloudCompare), and for the georeferenced orthomosaic in GeoTIFF format (AutoCAD Civil 3D). The study proved the high accuracy and significant statistical reliability of unmanned aerial vehicle (UAV) imaging 2D and 3D surveys. The accuracy fulfills Polish and US technical requirements for planimetric and vertical accuracy (root mean square error less than or equal to 0.10 m and 0.05 m, respectively). Full article
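The blur-detection step named in the abstract is commonly implemented as a variance-of-Laplacian test; the sketch below reproduces that idea in plain NumPy (the study itself used OpenCV in Python), with synthetic images standing in for real frames:

```python
import numpy as np

def variance_of_laplacian(gray):
    """Blur metric: variance of the image's Laplacian response. Sharp
    images have strong edges and score high; blurred ones score low.
    (Equivalent in spirit to OpenCV's Laplacian; here it is a plain
    NumPy convolution with the 4-neighbour kernel.)"""
    g = np.asarray(gray, dtype=float)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

# A sharp synthetic checkerboard versus a smooth ramp standing in for
# a heavily blurred frame; the scores differ by orders of magnitude.
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
blurred = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
score_sharp = variance_of_laplacian(sharp)
score_blurred = variance_of_laplacian(blurred)
```

In a real pipeline a threshold on this score would decide which frames are dropped before the bundle block adjustment; the threshold itself has to be tuned to the camera and flight conditions.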

Open Access Article
A Monocular SLAM-based Controller for Multirotors with Sensor Faults under Ground Effect
Sensors 2019, 19(22), 4948; https://doi.org/10.3390/s19224948 - 13 Nov 2019
Abstract
Multirotor micro air vehicles can operate in complex and confined environments that are otherwise inaccessible to larger drones. Operation in such environments results in airflow interactions between the propellers and proximate surfaces. The most common of these interactions is the ground effect. In addition to the increment in thrust efficiency, this effect disturbs the onboard sensors of the drone. In this paper, we present a fault-tolerant scheme for a multirotor with altitude sensor faults caused by the ground effect. We assume a hierarchical control structure for trajectory tracking. The structure consists of an external Proportional-Derivative controller and an internal Proportional-Integral controller. We consider that the sensor faults occur on the inner loop and counteract them in the outer loop. In a novel approach, we use a metric monocular Simultaneous Localization and Mapping algorithm for detecting internal faults. We design the fault diagnosis scheme as a logical process which depends on the weighted residual. Furthermore, we propose two control strategies for fault mitigation. The first combines the external PD controller and a function of the residual. The second treats the sensor fault as an actuator fault and compensates with a sliding mode action. In either case, we utilize onboard sensors only. Finally, we evaluate the effectiveness of the strategies in simulations and experiments. Full article
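The weighted-residual diagnosis can be sketched as follows; the SLAM altitude acts as the redundant reference, and the smoothing gain and threshold are illustrative placeholders, not the authors' values:

```python
def detect_altitude_fault(z_altimeter, z_slam, threshold=0.5, alpha=0.3):
    """Flag an altitude-sensor fault when an exponentially weighted
    residual between the altimeter and the metric monocular-SLAM
    altitude exceeds a threshold. Gains and threshold are illustrative."""
    weighted = 0.0
    flags = []
    for z_a, z_s in zip(z_altimeter, z_slam):
        residual = abs(z_a - z_s)
        weighted = alpha * residual + (1.0 - alpha) * weighted
        flags.append(weighted > threshold)
    return flags

# Near the ground the altimeter is disturbed by the ground effect
# while SLAM keeps tracking the true descent; the weighted residual
# filters out a single-sample glitch but trips on a sustained offset.
z_slam = [2.0, 1.5, 1.0, 0.6, 0.4]
z_altimeter = [2.0, 1.5, 1.0, 1.4, 1.6]
flags = detect_altitude_fault(z_altimeter, z_slam)
```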

Open Access Article
Automatic Change Detection System over Unmanned Aerial Vehicle Video Sequences Based on Convolutional Neural Networks
Sensors 2019, 19(20), 4484; https://doi.org/10.3390/s19204484 - 16 Oct 2019
Cited by 3
Abstract
In recent years, the use of unmanned aerial vehicles (UAVs) for surveillance tasks has increased considerably. This technology provides a versatile and innovative approach to the field. However, the automation of tasks such as object recognition or change detection usually requires image processing techniques. In this paper we present a system for change detection in video sequences acquired by moving cameras. It is based on the combination of image alignment techniques with a deep learning model based on convolutional neural networks (CNNs). This approach covers two important topics. Firstly, the capability of our system to be adaptable to variations in the UAV flight. In particular, the difference of height between flights, and a slight modification of the camera’s position or movement of the UAV because of natural conditions such as the effect of wind. These modifications can be produced by multiple factors, such as weather conditions, security requirements or human errors. Secondly, the precision of our model to detect changes in diverse environments, which has been compared with state-of-the-art methods in change detection. This has been measured using the Change Detection 2014 dataset, which provides a selection of labelled images from different scenarios for training change detection algorithms. We have used images from dynamic background, intermittent object motion and bad weather sections. These sections have been selected to test our algorithm’s robustness to changes in the background, as in real flight conditions. Our system provides a precise solution for these scenarios, as the mean F-measure score from the image analysis surpasses 97%, and a significant precision in the intermittent object motion category, where the score is above 99%. Full article
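Where such change masks come from can be shown with plain thresholded differencing of an already-aligned frame pair; the paper's CNN replaces this naive classifier, so the sketch below only illustrates the input/output shape of the problem:

```python
import numpy as np

def change_mask(ref, cur, threshold=30):
    """Per-pixel change mask between an aligned reference frame and
    the current frame. Thresholded absolute differencing is the
    simplest possible classifier; the paper instead feeds the aligned
    pair to a convolutional network."""
    diff = np.abs(np.asarray(ref, dtype=float) - np.asarray(cur, dtype=float))
    return diff > threshold

# A toy 8x8 grayscale pair: a bright object appears in the scene.
ref = np.zeros((8, 8))
cur = ref.copy()
cur[2:4, 2:4] = 255.0
mask = change_mask(ref, cur)
```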

Open Access Article
UAV Flight and Landing Guidance System for Emergency Situations
Sensors 2019, 19(20), 4468; https://doi.org/10.3390/s19204468 - 15 Oct 2019
Cited by 1
Abstract
Unmanned aerial vehicles (UAVs) with high mobility can perform various roles such as delivering goods, collecting information, recording videos and more. However, many elements in the city disturb the flight of UAVs, such as various obstacles and urban canyons, which can cause a multi-path effect on GPS signals and degrade the accuracy of GPS-based localization. To ensure the safety of UAVs flying in urban areas, they should be guided to a safe area even in a GPS-denied or network-disconnected environment. UAVs must also be able to avoid obstacles while landing in an urban area. For this purpose, we present the UAV detour system for operating UAVs in urban areas. The system includes a highly reliable laser guidance system to guide UAVs to a point where they can land, and an optical flow magnitude map for avoiding obstacles during a safe landing. Full article

Open Access Article
Airborne Visual Detection and Tracking of Cooperative UAVs Exploiting Deep Learning
Sensors 2019, 19(19), 4332; https://doi.org/10.3390/s19194332 - 07 Oct 2019
Cited by 5
Abstract
The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative approach to carry out this task based on deep learning. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high-level accuracy, and robustness against challenging conditions in terms of illumination, background and target-range variability. Full article
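The role of the navigation hints can be sketched as a gating step: the companion UAV's broadcast position predicts a bounding box, and detector outputs are kept only if they overlap it sufficiently. The boxes, thresholds, and IoU gating shown here are illustrative (the paper integrates the hints into a fuller processing architecture around YOLO):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def gate_detections(detections, predicted_box, min_iou=0.3):
    """Keep only detections consistent with where the cooperative
    navigation data says the target UAV should appear."""
    return [d for d in detections if iou(d, predicted_box) >= min_iou]

# Predicted box derived from the companion UAV's broadcast position,
# plus two detections: one on-target, one spurious (e.g., a bird).
predicted = (100, 100, 160, 140)
detections = [(105, 102, 158, 138), (300, 50, 340, 80)]
kept = gate_detections(detections, predicted)
```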

Open Access Article
Evaluating Water Level Changes at Different Tidal Phases Using UAV Photogrammetry and GNSS Vertical Data
Sensors 2019, 19(17), 3778; https://doi.org/10.3390/s19173778 - 31 Aug 2019
Cited by 2
Abstract
Evaluating water level changes at intertidal zones is complicated because of dynamic tidal inundation. However, water level changes during different tidal phases could be evaluated using a digital surface model (DSM) captured by unmanned aerial vehicle (UAV) with higher vertical accuracy provided by a Global Navigation Satellite System (GNSS). Image acquisition using a multirotor UAV and vertical data collection from GNSS survey were conducted at Kilim River, Langkawi Island, Kedah, Malaysia during two different tidal phases, at high and low tides. Using the Structure from Motion (SFM) algorithm, a DSM and orthomosaics were produced as the main sources of data analysis. GNSS provided horizontal and vertical geo-referencing for both the DSM and orthomosaics during post-processing after field observation at the study area. The DSM vertical accuracy against the tidal data from a tide gauge was about 12.6 cm (0.126 m) for high tide and 34.5 cm (0.345 m) for low tide. Hence, the vertical accuracy of the DSM height is still within a tolerance of ±0.5 m (with GNSS positioning data). These results open new opportunities to explore more validation methods for water level changes using various aerial platforms besides Light Detection and Ranging (LiDAR) and tidal data in the future. Full article

Open Access Article
Towards Automatic UAS-Based Snow-Field Monitoring for Microclimate Research
Sensors 2019, 19(8), 1945; https://doi.org/10.3390/s19081945 - 25 Apr 2019
Cited by 1
Abstract
This article presents unmanned aerial system (UAS)-based photogrammetry as an efficient method for the estimation of snow-field parameters, including snow depth, volume, and snow-covered area. Unlike similar studies employing UASs, this method benefits from the rapid development of compact, high-accuracy global navigation satellite system (GNSS) receivers. Our custom-built, multi-sensor system for UAS photogrammetry facilitates attaining centimeter- to decimeter-level object accuracy without deploying ground control points; this technique is generally known as direct georeferencing. The method was demonstrated at Mapa Republiky, a snow field located in the Krkonose, a mountain range in the Czech Republic. The location has attracted the interest of scientists due to its specific characteristics; multiple approaches to snow-field parameter estimation have thus been employed in that area to date. According to the results achieved within this study, the proposed method can be considered the optimum solution since it not only attains superior density and spatial object accuracy (approximately one decimeter) but also significantly reduces the data collection time and, above all, eliminates field work to markedly reduce the health risks associated with avalanches. Full article
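Snow depth, covered area, and volume follow from differencing a snow-covered DSM against a snow-free terrain model; the sketch below shows that generic computation on a toy grid (cell size, cutoff, and data are illustrative, and the paper's DSMs come from direct georeferencing):

```python
import numpy as np

def snow_volume(dsm_snow, dsm_bare, cell_size):
    """Snow depth, covered area, and volume from two co-registered
    DSMs: the snow-covered surface minus the snow-free terrain.
    The 5 cm cutoff for 'covered' is an illustrative choice."""
    depth = np.asarray(dsm_snow, dtype=float) - np.asarray(dsm_bare, dtype=float)
    depth = np.clip(depth, 0.0, None)          # negative values are noise
    covered_area = (depth > 0.05).sum() * cell_size ** 2
    volume = depth.sum() * cell_size ** 2
    return depth, covered_area, volume

# Toy 4x4 grid at 0.5 m resolution: 1.2 m of snow on four cells.
bare = np.zeros((4, 4))
snow = np.zeros((4, 4))
snow[:2, :2] = 1.2
depth, area, volume = snow_volume(snow, bare, cell_size=0.5)
```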

Open Access Article
UAVs for Structure-From-Motion Coastal Monitoring: A Case Study to Assess the Evolution of Embryo Dunes over a Two-Year Time Frame in the Po River Delta, Italy
Sensors 2019, 19(7), 1717; https://doi.org/10.3390/s19071717 - 10 Apr 2019
Cited by 4
Abstract
Coastal environments are usually characterized by a fragile balance, especially in terms of sediment transportation. The formation of dunes, as well as their sudden destruction as a result of violent storms, affects this balance in a significant way. Moreover, the growth of vegetation on the top of the dunes strongly influences the consequent growth of the dunes themselves. This work presents the results obtained through long-term monitoring of a complex dune system by the use of Unmanned Aerial Vehicles (UAVs). Six different surveys were carried out between November 2015 and December 2017 in the littoral of Rosolina Mare (Italy). Aerial photogrammetric data were acquired during flight repetitions using a DJI Phantom 3 Professional with the camera in a nadiral arrangement. The processing of the captured images consisted of the reconstruction of a three-dimensional model using Structure from Motion (SfM). Each model was framed in the European Terrestrial Reference System (ETRS) using GNSS geodetic receivers in Network Real Time Kinematic (NRTK) mode. Specific data management was necessary for the vegetation, by filtering the dense cloud; this was performed by both slope detection and removal of the residual outliers. The final products of this approach were Digital Elevation Models (DEMs) of the sandy coastal section. In addition, DEMs of Difference (DoD) were computed for the purpose of monitoring over time and detecting variations. The accuracy assessment of the DEMs was carried out by an elevation comparison against specifically GNSS-surveyed points. Relevant cross sections were also extracted and compared. The use of the Structure-from-Motion approach by UAVs finally proved to be both reliable and time-saving, thanks to quicker in situ operations for data acquisition and an accurate reconstruction of high-resolution elevation models. The low cost of the system and its flexibility represent additional strengths, making this technique highly competitive with traditional ones. Full article

Open Access Article
UAV Landing Based on the Optical Flow Videonavigation
Sensors 2019, 19(6), 1351; https://doi.org/10.3390/s19061351 - 18 Mar 2019
Cited by 2
Abstract
An automatic landing of an unmanned aerial vehicle (UAV) is a non-trivial task requiring the solution of a variety of technical and computational problems. The most important is the precise determination of altitude, especially at the final stage of the approach to the ground. With current altimeters, the measurement errors in the final phase of the descent may be unacceptably large for constructing an algorithm to control the landing manoeuvre. It is therefore desirable to have an additional sensor that makes it possible to estimate the height above the runway surface. All linear and angular UAV velocities can be estimated simultaneously, albeit in pixel scale, with the help of so-called optical flow (OF), determined from the sequence of images recorded by an onboard camera. To transform them into real metric values, the current flight altitude and the camera's angular position must be known. A critical feature of the OF is its sensitivity to the camera resolution and to the rate at which the observed scene shifts. During the descent phase of flight, these parameters change, together with the altitude, by a factor of at least one hundred. For reliable application of the OF, the shooting parameters therefore need to be coordinated with the current altitude. However, if the altimeter has failed, the altitude itself must also be estimated from the OF, so another tool is needed for camera control. One possible and straightforward approach is to change the effective camera resolution by averaging pixels in software, in coordination with the theoretically estimated and measured OF velocity. The article presents the results of testing such algorithms on real video sequences obtained in flights with different approaches to the runway, with simultaneous recording of telemetry and video data. Full article
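The pixel-to-metric conversion described above is the pinhole relation v = flow · h / f for a nadir-pointing camera, and the pixel-averaging idea can then be driven by altitude. Both functions below are illustrative sketches, not the authors' algorithm, and the reference altitude is a made-up parameter:

```python
def pixel_flow_to_metric(flow_px_per_s, altitude_m, focal_px):
    """Convert optical-flow velocity in pixels/s to ground velocity in
    m/s for a nadir-pointing pinhole camera: v = flow * h / f. This is
    why pixel-scale flow needs the current altitude to become metric."""
    return flow_px_per_s * altitude_m / focal_px

def downsample_factor(altitude_m, reference_alt_m=100.0):
    """As altitude drops, apparent pixel motion grows roughly as 1/h,
    so average pixels to keep the flow rate in a workable range; an
    illustrative version of coordinating resolution with altitude."""
    return max(1, round(reference_alt_m / max(altitude_m, 1.0)))

# At 50 m altitude with a 1000 px focal length, 20 px/s of flow
# corresponds to 1 m/s of ground speed.
v = pixel_flow_to_metric(20.0, 50.0, 1000.0)
```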
