Search Results (59)

Search Parameters:
Keywords = photogrammetric vision

26 pages, 11344 KiB  
Article
A Robust Tool for 3D Rail Mapping Using UAV Data Photogrammetry, AI and CV: qAicedrone-Rail
by Innes Barbero-García, Diego Guerrero-Sevilla, David Sánchez-Jiménez and David Hernández-López
Drones 2025, 9(3), 197; https://doi.org/10.3390/drones9030197 - 10 Mar 2025
Viewed by 1032
Abstract
Rail systems are essential for economic growth and regional connectivity, but aging infrastructures face challenges from increased demand and environmental factors. Traditional inspection methods, such as visual inspections, are inefficient and costly, and they pose safety risks. Unmanned Aerial Vehicles (UAVs) have become a viable alternative for rail mapping and monitoring. This study presents a robust method for the 3D extraction of rail tracks from UAV-based aerial imagery. The approach integrates YOLOv8 for initial detection and segmentation, photogrammetry for 3D data extraction, and computer vision techniques with a multi-view approach to enhance accuracy. The tool was tested in a complex real-world scenario. Errors of 2 cm and 4 cm were obtained for planimetry and altimetry, respectively. The detection performance and metric results show a significant reduction in errors and increased precision compared to intermediate YOLO-based outputs. In comparison to most image-based methodologies, the tool has the advantage of generating both accurate altimetric and planimetric data. The generated data exceed the requirements for cartography at a scale of 1:500, as required by the Spanish regulations for photogrammetric works for rail infrastructures. The tool, integrated into the open-source QGIS platform, is user-friendly and aims to improve rail system maintenance and safety.
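
The abstract names YOLOv8 as the detection and segmentation stage. As a rough illustration only (not the authors' qAicedrone-Rail code), running a fine-tuned YOLOv8 segmentation model with the ultralytics package looks like this; the weights file "rail_seg.pt" and the image name are hypothetical:

```python
from ultralytics import YOLO

model = YOLO("rail_seg.pt")        # hypothetical rail-segmentation weights
results = model("uav_frame.jpg")   # detection + instance segmentation in one call

for r in results:
    if r.masks is None:
        continue                   # no rail candidates in this frame
    for box, polygon in zip(r.boxes, r.masks.xy):
        print(f"rail candidate: confidence={float(box.conf):.2f}, "
              f"{len(polygon)} mask vertices")
```

In the paper's pipeline, photogrammetry and the multi-view check then lift such per-image segmentations into 3D and filter out spurious detections.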

21 pages, 11982 KiB  
Article
Aerial-Drone-Based Tool for Assessing Flood Risk Areas Due to Woody Debris Along River Basins
by Innes Barbero-García, Diego Guerrero-Sevilla, David Sánchez-Jiménez, Ángel Marqués-Mateu and Diego González-Aguilera
Drones 2025, 9(3), 191; https://doi.org/10.3390/drones9030191 - 6 Mar 2025
Cited by 2 | Viewed by 1576
Abstract
River morphology is highly dynamic, requiring accurate datasets and models for effective management, especially in flood-prone regions. Climate change and urbanisation have intensified flooding events, increasing risks to populations and infrastructure. Woody debris, a natural element of river ecosystems, poses a dual challenge: while it provides critical habitats, it can obstruct water flow, exacerbate flooding, and threaten infrastructure. Traditional debris detection methods are time-intensive, hazardous, and limited in scope. This study introduces a novel tool, fully integrated into a geospatial Web platform (WebGIS), that uses artificial intelligence (AI) and computer vision (CV) to detect woody debris in rivers from aerial drone imagery. The tool identifies and segments debris, assigning risk levels based on obstruction severity. When using orthoimages as input data, the tool provides georeferenced locations and detailed reports to support flood mitigation and river management. The methodology encompasses drone data acquisition, photogrammetric processing, debris detection, and risk assessment, and it is validated using real-world data. The results show the tool’s capacity to detect large woody debris in a fully automatic manner. This approach automates woody debris detection and risk analysis, making it easier to manage rivers and providing valuable data for assessing flood risk.
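
A minimal sketch of the risk-grading idea, assigning a level from the fraction of the channel width that a debris accumulation obstructs; the thresholds are invented for illustration and are not the paper's criteria:

```python
def debris_risk_level(debris_width_m: float, channel_width_m: float) -> str:
    """Grade obstruction severity as a fraction of channel width (illustrative thresholds)."""
    obstruction = debris_width_m / channel_width_m
    if obstruction < 0.25:
        return "low"
    if obstruction < 0.50:
        return "medium"
    return "high"

print(debris_risk_level(6.0, 15.0))  # 0.40 of the channel -> "medium"
```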

21 pages, 15517 KiB  
Article
3D Reconstruction of Building Blocks Based on Extraction of Exterior Wall Lines Using Point Cloud Density Generated from Spherical Camera Images
by Qazale Askari, Hossein Arefi and Mehdi Maboudi
Remote Sens. 2024, 16(23), 4377; https://doi.org/10.3390/rs16234377 - 23 Nov 2024
Viewed by 1348
Abstract
The 3D modeling of urban buildings has become a common research area in disciplines such as photogrammetry and computer vision, with applications including intelligent city management, navigation of self-driving cars, and architecture, to name a few. The objective of this study is to produce a 3D model of the external facades of buildings with the precision, accuracy and level of detail required by the user, while minimizing time and cost. This research focuses on the production of 3D models of blocks of residential buildings in Tehran, Iran. The Insta 360 One X2 spherical camera was selected for data capture due to its low cost and 360 × 180° field of view, which made data collection more efficient in terms of both time and cost. The proposed modeling method is based on extracting the lines of external walls using the concept of point cloud density. Initially, photogrammetric point clouds are generated from the spherical camera images with a reconstruction precision of 0.24 m. In the next step, the 3D point cloud is projected into a 2D point cloud by setting the height component to zero. The 2D point cloud is then rotated by the direction angle determined with the Hough transform, so that perpendicular walls lie parallel to the axes of the coordinate system. Next, a 2D point cloud density analysis is performed by voxelizing the point cloud and counting the number of points in each voxel in both the horizontal and vertical directions. By locating the peaks in the density plot, the lines of the external vertical and horizontal walls are extracted. To extract diagonal external walls, the density analysis is performed in the direction of the first principal component. Finally, by determining the height of each wall from the point cloud, a 3D model is created at Level of Detail 1 (LoD1). The resulting model has a precision of 0.32 m compared with real dimensions, and the 2D plan has a precision of 0.31 m compared with the ground-truth map. The use of a spherical camera and point cloud density analysis makes this method efficient and cost-effective, a promising approach for future urban modeling projects.
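
The peak-finding step is the core of the method and is easy to picture in code. A minimal sketch, assuming numpy/scipy and a point cloud in metres; the bin size and prominence threshold are assumptions, not the authors' values:

```python
import numpy as np
from scipy.signal import find_peaks

def wall_lines(points_xyz: np.ndarray, angle_rad: float, bin_m: float = 0.1) -> dict:
    """Project to 2D, rotate by the Hough-derived angle, return wall positions per axis."""
    pts = points_xyz[:, :2]                        # set the height component aside
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    pts = pts @ np.array([[c, -s], [s, c]]).T      # align walls with the axes
    lines = {}
    for axis, name in ((0, "vertical"), (1, "horizontal")):
        coords = pts[:, axis]
        bin_edges = np.arange(coords.min(), coords.max() + bin_m, bin_m)
        hist, _ = np.histogram(coords, bins=bin_edges)
        peaks, _ = find_peaks(hist, prominence=0.2 * hist.max())
        lines[name] = bin_edges[peaks] + bin_m / 2  # density peaks = wall lines (m)
    return lines
```

Diagonal walls would be handled the same way after rotating into the direction of the first principal component, as the abstract describes.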

18 pages, 40637 KiB  
Article
Development of a Drone-Based Phenotyping System for European Pear Rust (Gymnosporangium sabinae) in Orchards
by Virginia Maß, Johannes Seidl-Schulz, Matthias Leipnitz, Eric Fritzsche, Martin Geyer, Michael Pflanz and Stefanie Reim
Agronomy 2024, 14(11), 2643; https://doi.org/10.3390/agronomy14112643 - 9 Nov 2024
Viewed by 1275
Abstract
Computer vision techniques offer promising tools for disease detection in orchards and can enable effective phenotyping for the selection of resistant cultivars in breeding programmes and research. In this study, a digital phenotyping system for disease detection and monitoring was developed using drones, object detection and photogrammetry, focusing on European pear rust (Gymnosporangium sabinae) as a model pathogen. High-resolution RGB images from ten low-altitude drone flights were collected in 2021, 2022 and 2023. A total of 16,251 annotations of leaves with pear rust symptoms were created on 584 images using the Computer Vision Annotation Tool (CVAT). The YOLO algorithm was used for the automatic detection of symptoms. A novel photogrammetric approach using Agisoft’s Metashape Professional software ensured the accurate localisation of symptoms. The geographic information system software QGIS calculated the infestation intensity per tree based on the canopy areas. This drone-based phenotyping system shows promising results and could considerably simplify the tasks involved in fruit breeding research.
(This article belongs to the Section Precision and Digital Agriculture)
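
The per-tree metric computed in QGIS reduces to symptoms per unit canopy area. A trivial sketch under that assumption (the function and field names are hypothetical):

```python
def infestation_intensity(symptom_count: int, canopy_area_m2: float) -> float:
    """Pear rust symptom detections per square metre of projected canopy."""
    return symptom_count / canopy_area_m2

print(infestation_intensity(324, 12.5))  # 25.92 symptoms per m^2
```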

22 pages, 2821 KiB  
Entry
Oblique Aerial Images: Geometric Principles, Relationships and Definitions
by Styliani Verykokou and Charalabos Ioannidis
Encyclopedia 2024, 4(1), 234-255; https://doi.org/10.3390/encyclopedia4010019 - 2 Feb 2024
Cited by 2 | Viewed by 5957
Definition
Aerial images captured with the camera optical axis deliberately inclined with respect to the vertical are defined as oblique aerial images. Oblique aerial images have held a prominent place since the inception of aerial photography. While vertical airborne images dominated photogrammetric applications for over a century, advancements in photogrammetry and computer vision algorithms, coupled with the growing accessibility of oblique images in the market, have propelled the rise of oblique images in recent times. Their emergence is attributed to the inherent advantages they offer over vertical images. In this entry, basic definitions, geometric principles and relationships for oblique aerial images, necessary for understanding their underlying geometry, are presented.
(This article belongs to the Section Engineering)
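
For reference, the geometry underlying both vertical and oblique imagery is the collinearity condition. In one common textbook convention, with principal point (x_0, y_0), focal length f, projection centre (X_L, Y_L, Z_L) and rotation matrix M = (m_ij) from object space to image space:

```latex
x = x_0 - f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}
                  {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad
y = y_0 - f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}
                  {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}
```

For a vertical image, M is close to a rotation about the vertical axis and the image scale is nearly uniform; deliberately tilting the optical axis makes M depart from that case, so the scale varies systematically across the frame, which is the defining property of oblique imagery.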

25 pages, 5518 KiB  
Article
High-Altitude Precision Landing by Smartphone Video Guidance Sensor and Sensor Fusion
by Joao Leonardo Silva Cotta, Hector Gutierrez, Ivan R. Bertaska, John P. Inness and John Rakoczy
Drones 2024, 8(2), 37; https://doi.org/10.3390/drones8020037 - 25 Jan 2024
Cited by 2 | Viewed by 3648
Abstract
This paper describes the deployment, integration, and demonstration of the Smartphone Video Guidance Sensor (SVGS) as a novel technology for autonomous 6-DOF proximity maneuvers and high-altitude precision landing of UAVs via sensor fusion. The proposed approach uses a vision-based photogrammetric position and attitude sensor (SVGS) to support the precise automated landing of a UAV from an initial altitude above 100 m to the ground, guided by an array of landing beacons. SVGS information is fused with other on-board sensors at the flight control unit to estimate the UAV’s position and attitude during landing relative to a ground coordinate system defined by the landing beacons. While the SVGS can provide mm-level absolute positioning accuracy depending on range and beacon dimensions, its proper operation requires a line of sight between the camera and the beacon, and readings can be disturbed by environmental lighting conditions and reflections. SVGS readings can therefore be intermittent, and their update rate is not deterministic, since the SVGS runs on an Android device. The sensor fusion of the SVGS with on-board sensors enables an accurate and reliable update of the position and attitude estimates during landing, providing improved performance compared with state-of-the-art automated landing technology based on an infrared beacon, but its implementation must address the challenges mentioned above. The proposed technique also shows significant advantages compared with state-of-the-art sensors for high-altitude landing, such as those based on LIDAR.
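
The fusion step can be pictured with a toy one-dimensional filter: propagate altitude from on-board velocity at a fixed rate, and correct whenever an SVGS fix arrives, since SVGS readings are intermittent and non-deterministic. This is a hedged sketch with invented noise values, not the flight code:

```python
def fuse(imu_velocities, svgs_fixes, dt=0.02, q=0.05, r_svgs=0.01):
    """Toy 1-D Kalman filter: predict from velocity, update on intermittent SVGS fixes."""
    x, p = 0.0, 1.0                    # state estimate and its variance
    history = []
    for vel, fix in zip(imu_velocities, svgs_fixes):
        x += vel * dt                  # predict from on-board sensors
        p += q
        if fix is not None:            # SVGS line of sight available this cycle
            k = p / (p + r_svgs)       # Kalman gain
            x += k * (fix - x)
            p *= 1.0 - k
        history.append(x)
    return history

# Example: SVGS drops out on cycles where no beacon is visible.
print(fuse([0.5, 0.5, 0.5], [None, 0.03, None]))
```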

21 pages, 4016 KiB  
Article
The SmartLandMaps Approach for Participatory Land Rights Mapping
by Claudia Lindner, Auriol Degbelo, Gergely Vassányi, Kaspar Kundert and Angela Schwering
Land 2023, 12(11), 2043; https://doi.org/10.3390/land12112043 - 10 Nov 2023
Cited by 4 | Viewed by 2761
Abstract
Millions of formal and informal land rights are still undocumented worldwide and there is a need for scalable techniques to facilitate that documentation. In this context, sketch mapping based on printed high-resolution satellite or aerial imagery is being promoted as a fit-for-purpose land administration method and can be seen as a promising way to collect cadastral and land use information with the community in a rapid and cost-effective manner. The main disadvantage of paper-based mapping is the need for digitization to facilitate the integration with existing land administration information systems and the sustainable use of the data. Currently, this digitization is mostly done manually, which is time-consuming and error-prone. This article presents the SmartLandMaps approach to land rights mapping and digitization to address this gap. The recording involves the use of sketches during participatory mapping activities to delineate parcel boundaries, and the use of mobile phones to collect attribute information about spatial units and land rights holders. The digitization involves the use of photogrammetric techniques to derive a digital representation from the annotated paper maps, and the use of computer vision techniques to automate the extraction of parcel boundaries and stickers from raster maps. The approach was deployed in four scenarios across Africa, revealing its simplicity, versatility, efficiency, and cost-effectiveness. It can be regarded as a scalable alternative to traditional paper-based participatory land rights mapping.
(This article belongs to the Special Issue Land, Innovation and Social Good 2.0)
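
The boundary-extraction step can be approximated with generic OpenCV operations. This is a sketch of the idea (binarise, trace contours, keep large closed shapes), not the SmartLandMaps implementation; the filename and thresholds are assumptions:

```python
import cv2

img = cv2.imread("annotated_map.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep sizeable closed shapes as parcel candidates and simplify their outlines.
parcels = [cv2.approxPolyDP(c, 2.0, True)
           for c in contours if cv2.contourArea(c) > 500]
print(f"{len(parcels)} parcel candidates extracted")
```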

18 pages, 20818 KiB  
Article
A Visual Odometry Pipeline for Real-Time UAS Geopositioning
by Jianli Wei and Alper Yilmaz
Drones 2023, 7(9), 569; https://doi.org/10.3390/drones7090569 - 5 Sep 2023
Cited by 3 | Viewed by 3088
Abstract
The state of the art in geopositioning is the Global Navigation Satellite System (GNSS), which relies on satellite constellations to provide positioning, navigation, and timing services. While the Global Positioning System (GPS) is widely used to position an Unmanned Aerial System (UAS), it is not always available and can be jammed, introducing operational liabilities. When the GPS signal is degraded or denied, the UAS navigation solution cannot rely on the incorrect positions GPS provides, resulting in a potential loss of control. This paper presents a real-time pipeline that provides geopositioning functionality using a down-facing monocular camera. The proposed approach is deployable using only a few initialization parameters, the most important of which is the map of the area covered by the UAS flight plan. Our pipeline consists of an offline geospatial quad-tree generation step for fast information retrieval, a choice from a selection of landmark detection and matching schemes, and an attitude control mechanism that improves reference-to-acquired image matching. To evaluate our method, we collected several image sequences using various flight patterns across different seasons. The experiments demonstrate high accuracy and robustness to seasonal changes.
(This article belongs to the Special Issue Advances in AI for Intelligent Autonomous Systems)
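
The offline quad-tree exists so each acquired frame is matched against a small map tile rather than the whole flight area. A minimal sketch of quad-tree tiling, assuming a square map extent in metres (the paper's actual indexing scheme may differ):

```python
def quadkey(x: float, y: float, extent: float, depth: int) -> str:
    """Encode a position as the path of quadrants from the root tile to a leaf."""
    key, x0, y0, half = "", 0.0, 0.0, extent / 2
    for _ in range(depth):
        qx, qy = x >= x0 + half, y >= y0 + half
        key += str(int(qx) + 2 * int(qy))    # quadrant id in 0..3
        if qx: x0 += half
        if qy: y0 += half
        half /= 2
    return key

print(quadkey(620.0, 130.0, 1000.0, 4))  # -> "1021"
```

Frames whose predicted position shares a key prefix are looked up in the same tile, which keeps reference retrieval fast at runtime.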

18 pages, 9369 KiB  
Article
Quantifying the Loss of Coral from a Bleaching Event Using Underwater Photogrammetry and AI-Assisted Image Segmentation
by Kai L. Kopecky, Gaia Pavoni, Erica Nocerino, Andrew J. Brooks, Massimiliano Corsini, Fabio Menna, Jordan P. Gallagher, Alessandro Capra, Cristina Castagnetti, Paolo Rossi, Armin Gruen, Fabian Neyer, Alessandro Muntoni, Federico Ponchio, Paolo Cignoni, Matthias Troyer, Sally J. Holbrook and Russell J. Schmitt
Remote Sens. 2023, 15(16), 4077; https://doi.org/10.3390/rs15164077 - 18 Aug 2023
Cited by 17 | Viewed by 10054
Abstract
Detecting the impacts of natural and anthropogenic disturbances that cause declines in organisms or changes in community composition has long been a focus of ecology. However, a tradeoff often exists between the spatial extent over which relevant data can be collected and the resolution of those data. Recent advances in underwater photogrammetry, as well as computer vision and machine learning tools that employ artificial intelligence (AI), offer potential solutions with which to resolve this tradeoff. Here, we coupled a rigorous photogrammetric survey method with novel AI-assisted image segmentation software in order to quantify the impact of a coral bleaching event on a tropical reef, both at an ecologically meaningful spatial scale and with high spatial resolution. In addition to outlining our workflow, we highlight three key results: (1) dramatic changes in the three-dimensional surface areas of live and dead coral, as well as in the ratio of live to dead colonies before and after bleaching; (2) a size-dependent pattern of mortality in bleached corals, in which the largest corals were disproportionately affected; and (3) a significantly greater decline in the surface area of live coral as revealed by our approximation of 3D shape compared with the more standard planar-area (2D) approach. The technique of photogrammetry allows us to turn 2D images into approximate 3D models in a flexible and efficient way. Increasing the resolution, accuracy, spatial extent, and efficiency with which we can quantify the effects of disturbances will improve our ability to understand the ecological consequences that cascade from small to large scales, and will allow more informed decisions to be made regarding the mitigation of undesired impacts.
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)
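
Result (3) hinges on measuring 3D surface area rather than planar (2D) area. A compact sketch of both quantities for a triangulated coral model, assuming a height-field-like mesh so that the flattened triangles approximate the planar footprint:

```python
import numpy as np

def mesh_areas(vertices: np.ndarray, faces: np.ndarray):
    """vertices: (n, 3) floats; faces: (m, 3) vertex indices. Returns (3D, planar) area."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    area_3d = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
    flat = vertices.copy()
    flat[:, 2] = 0.0                               # project onto the horizontal plane
    fa, fb, fc = (flat[faces[:, i]] for i in range(3))
    area_2d = 0.5 * np.linalg.norm(np.cross(fb - fa, fc - fa), axis=1).sum()
    return area_3d, area_2d
```

The ratio of the two is one common rugosity index; a colony losing fine branching structure shows a much larger drop in the 3D figure than in its planar footprint, consistent with the abstract's finding.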

20 pages, 24650 KiB  
Article
Fine-Grained 3D Modeling and Semantic Mapping of Coral Reefs Using Photogrammetric Computer Vision and Machine Learning
by Jiageng Zhong, Ming Li, Hanqi Zhang and Jiangying Qin
Sensors 2023, 23(15), 6753; https://doi.org/10.3390/s23156753 - 28 Jul 2023
Cited by 9 | Viewed by 3767
Abstract
Corals play a crucial role as the primary habitat-building organisms within reef ecosystems, forming expansive structures that extend over vast distances, akin to the way tall buildings define a city’s skyline. However, coral reefs are vulnerable to damage and destruction due to their inherent fragility and exposure to various threats, including the impacts of climate change. Similar to successful city management, the utilization of advanced underwater videography, photogrammetric computer vision, and machine learning can facilitate precise 3D modeling and the semantic mapping of coral reefs, aiding in their careful management and conservation to ensure their survival. This study focuses on generating detailed 3D mesh models, digital surface models, and orthomosaics of coral habitats by utilizing underwater coral images and control points. Furthermore, an innovative multi-modal deep neural network is designed to perform the pixel-wise semantic segmentation of orthomosaics, enabling the projection of resulting semantic maps onto a 3D space. Notably, this study achieves a significant milestone by accomplishing semantic fine-grained 3D modeling and rugosity evaluation of coral reefs with millimeter-level accuracy, providing a potent means to understand coral reef variations under climate change with high spatial and temporal resolution.
(This article belongs to the Special Issue Marine Environmental Perception and Underwater Detection)
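
Projecting the semantic map into 3D is conceptually simple once the orthomosaic labels and the digital surface model share a grid. A minimal sketch (array names and layout are assumptions):

```python
import numpy as np

def semantic_points(labels: np.ndarray, dsm: np.ndarray, gsd_m: float) -> np.ndarray:
    """labels/dsm: (h, w) co-registered rasters. Returns (n, 4) rows of [x, y, z, class]."""
    rows, cols = np.nonzero(labels > 0)            # labelled pixels only
    return np.column_stack([cols * gsd_m,          # x in metres (ground sampling distance)
                            rows * gsd_m,          # y in metres
                            dsm[rows, cols],       # height from the surface model
                            labels[rows, cols]])   # semantic class id
```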

12 pages, 6676 KiB  
Article
R.A.O. Project Recovery: Methods and Approaches for the Recovery of a Photographic Archive for the Creation of a Photogrammetric Survey of a Site Unreachable over Time
by Vittorio Lauro, Marco Giovannangelo, Mariella De Riggi, Nicola Lanzaro and Vittorio Murtas
Heritage 2023, 6(6), 4710-4721; https://doi.org/10.3390/heritage6060250 - 7 Jun 2023
Cited by 1 | Viewed by 1928
Abstract
The goal of this research is to make the 2012 photogrammetric surveys of the walls of Cortona accessible using new methodologies for recovering photographic material. This will allow a team of archaeologists to carry out a virtual reconnaissance of the surveyed stretch of wall and will provide the basis for future investigations into any changes that may have occurred in the wall since 2012. Photogrammetry is a widely used technique in archaeology that can help researchers accurately measure, reconstruct, and analyze different architectural components of a wall. By using state-of-the-art photogrammetric techniques, including advanced computer vision algorithms, our team aims to produce high-quality 3D models and accurate measurements of different parts of the wall. The results of this research project will enable archaeologists to gain a more comprehensive understanding of the layout of the fortifications and the role of the Cortonese walls in the historical context of the area. Additionally, the project will provide detailed documentation of the wall that will be useful for both archaeological researchers and cultural heritage organizations. Finally, it will also provide the basis for future investigations into potential changes that may have occurred in the wall since 2012, which will be important for monitoring conservation and restoration efforts and for maintaining an up-to-date record of the wall’s state of preservation.
(This article belongs to the Special Issue Non-invasive Technologies Applied in Cultural Heritage)

19 pages, 3010 KiB  
Article
Subgraph Learning for Topological Geolocalization with Graph Neural Networks
by Bing Zha and Alper Yilmaz
Sensors 2023, 23(11), 5098; https://doi.org/10.3390/s23115098 - 26 May 2023
Cited by 1 | Viewed by 2812
Abstract
One of the challenges of spatial cognition, such as self-localization and navigation, is to develop an efficient learning approach capable of mimicking human ability. This paper proposes a novel approach for topological geolocalization on a map using motion trajectories and graph neural networks. Specifically, our method trains a graph neural network to learn an embedding of the motion trajectory, encoded as a path subgraph whose nodes and edges represent turning-direction and relative-distance information. We formulate subgraph learning as a multi-class classification problem in which the output node IDs are interpreted as the object’s location on the map. After training on three map datasets of small, medium, and large size, the node localization tests on simulated trajectories generated from the maps show 93.61%, 95.33%, and 87.50% accuracy, respectively. We also demonstrate similar accuracy for our approach on actual trajectories generated by visual-inertial odometry. The key benefits of our approach are as follows: (1) it takes advantage of the powerful graph-modeling ability of graph neural networks, (2) it only requires a map in the form of a 2D graph, and (3) it only requires an affordable sensor that generates relative motion trajectories.
(This article belongs to the Section Navigation and Positioning)
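
A hedged sketch of the learning setup: a small graph neural network mapping trajectory-subgraph node features (turn direction, relative distance) to logits over map node IDs. The two-layer architecture and the use of torch_geometric are assumptions for illustration, not the paper's exact model:

```python
import torch
from torch_geometric.nn import GCNConv

class TrajectoryLocalizer(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_map_nodes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_map_nodes)   # one logit per map node ID

    def forward(self, x, edge_index):
        # x: (num_subgraph_nodes, in_dim) turn/distance features
        # edge_index: (2, num_edges) path-subgraph connectivity
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)              # feed to a cross-entropy loss
```

Training then reduces to multi-class classification of each node's map ID, as the abstract describes.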

18 pages, 11033 KiB  
Article
PhotoMatch: An Open-Source Tool for Multi-View and Multi-Modal Feature-Based Image Matching
by Esteban Ruiz de Oña, Inés Barbero-García, Diego González-Aguilera, Fabio Remondino, Pablo Rodríguez-Gonzálvez and David Hernández-López
Appl. Sci. 2023, 13(9), 5467; https://doi.org/10.3390/app13095467 - 27 Apr 2023
Cited by 5 | Viewed by 5615
Abstract
The accurate and reliable extraction and matching of distinctive features (keypoints) in multi-view and multi-modal datasets is still an open research topic in the photogrammetric and computer vision communities. One of the main challenges, however, is selecting the method best suited to a specific application. This encouraged us to develop an educational tool that brings together different hand-crafted and learning-based feature-extraction methods. This article presents PhotoMatch, a didactical, open-source tool for multi-view and multi-modal feature-based image matching. The software includes a wide range of state-of-the-art methodologies for preprocessing, feature extraction and matching, including deep learning detectors and descriptors. It also provides tools for a detailed assessment and comparison of the different approaches, allowing the user to select the best combination of methods for each specific multi-view and multi-modal dataset. The first version of the tool received an award from the ISPRS (ISPRS Scientific Initiatives, 2019). A set of thirteen case studies, including six multi-view and six multi-modal image datasets, is processed following different methodologies, and the results provided by the software are analysed to show the capabilities of the tool. The PhotoMatch installer and the source code are freely available.
(This article belongs to the Special Issue Digital Image Processing: Advanced Technologies and Applications)
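
As a flavour of the kind of pipeline PhotoMatch lets users compose and compare, here is a plain-OpenCV example of one classic hand-crafted combination (SIFT features with Lowe's ratio test); it is not PhotoMatch's internal code, and the image names are placeholders:

```python
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]     # Lowe's ratio test
print(f"{len(good)} putative matches")
```

PhotoMatch exposes such choices (preprocessing, detector, descriptor, matcher) behind one interface so that combinations can be assessed side by side.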

24 pages, 13414 KiB  
Article
A Comparison of UAV-Derived Dense Point Clouds Using LiDAR and NIR Photogrammetry in an Australian Eucalypt Forest
by Megan Winsen and Grant Hamilton
Remote Sens. 2023, 15(6), 1694; https://doi.org/10.3390/rs15061694 - 21 Mar 2023
Cited by 9 | Viewed by 3794
Abstract
Light detection and ranging (LiDAR) has been a tool of choice for 3D dense point cloud reconstructions of forest canopy over the past two decades, but advances in computer vision techniques, such as structure from motion (SfM) photogrammetry, have transformed 2D digital aerial imagery into a powerful, inexpensive and highly available alternative. Canopy modelling is complex and affected by a wide range of inputs. While studies have found dense point cloud reconstructions to be accurate, there is no standard approach to comparing outputs or assessing accuracy. Modelling is particularly challenging in native eucalypt forests, where the canopy displays abrupt vertical changes and highly varied relief. This study first investigated whether a remotely sensed LiDAR dense point cloud reconstruction of a native eucalypt forest completely reproduced canopy cover and accurately predicted tree heights. A further comparison was made with a photogrammetric reconstruction based solely on near-infrared (NIR) imagery to gain some insight into the contribution of the NIR spectral band to the 3D SfM reconstruction of native dry eucalypt open forest. The reconstructions did not produce comparable canopy height models and neither reconstruction completely reproduced canopy cover nor accurately predicted tree heights. Nonetheless, the LiDAR product was more representative of the eucalypt canopy than SfM-NIR. The SfM-NIR results were strongly affected by an absence of data in many locations, which was related to low canopy penetration by the passive optical sensor and sub-optimal feature matching in the photogrammetric pre-processing pipeline. To further investigate the contribution of NIR, future studies could combine NIR imagery captured at multiple solar elevations. A variety of photogrammetric pre-processing settings should continue to be explored in an effort to optimise image feature matching.
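
One standard way to compare such reconstructions is to rasterise each dense point cloud into a canopy height model (CHM) and difference it against ground elevation. A minimal sketch, assuming points and a co-registered ground raster on the same metric grid (names and layout are assumptions):

```python
import numpy as np

def canopy_height_model(points: np.ndarray, ground: np.ndarray, cell_m: float) -> np.ndarray:
    """points: (n, 3) canopy returns inside the raster extent; ground: (h, w) terrain raster."""
    chm = np.full(ground.shape, -np.inf)
    cols = (points[:, 0] // cell_m).astype(int)
    rows = (points[:, 1] // cell_m).astype(int)
    np.maximum.at(chm, (rows, cols), points[:, 2])   # highest return per cell
    chm[np.isinf(chm)] = np.nan                      # cells with no returns (data gaps)
    return chm - ground
```

The absence of data the abstract reports for the SfM-NIR cloud would surface here as NaN cells, which is one reason the two canopy height models were not directly comparable.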

17 pages, 6170 KiB  
Article
Precision Landing of a Quadcopter Drone by Smartphone Video Guidance Sensor in a GPS-Denied Environment
by Nicolas Bautista, Hector Gutierrez, John Inness and John Rakoczy
Sensors 2023, 23(4), 1934; https://doi.org/10.3390/s23041934 - 9 Feb 2023
Cited by 7 | Viewed by 3665
Abstract
This paper describes the deployment, integration, and demonstration of a Smartphone Video Guidance Sensor (SVGS) as a novel technology for autonomous 6-DOF proximity maneuvers and precision landing of a quadcopter drone. The proposed approach uses a vision-based photogrammetric position and attitude sensor (SVGS) to estimate the position of a landing target after video capture. A visual-inertial odometry (VIO) sensor provides position estimates of the UAV in a ground coordinate system during flight in a GPS-denied environment. The integration of the SVGS and VIO sensors enables the accurate updating of position setpoints during landing, providing improved performance compared with VIO-only landing, as shown in landing experiments. The proposed technique also shows significant operational advantages compared with state-of-the-art sensors for indoor landing, such as those based on augmented reality (AR) markers.
