Search Results (168)

Search Parameters:
Keywords = orthoimage

14 pages, 9340 KiB  
Article
How GeoAI Improves Tourist Beach Environments: Micro-Scale UAV Detection and Spatial Analysis of Marine Debris
by Junho Ser and Byungyun Yang
Land 2025, 14(7), 1349; https://doi.org/10.3390/land14071349 - 25 Jun 2025
Viewed by 344
Abstract
With coastal tourism depending on clean beaches and litter surveys remaining manual, sparse, and costly, this study coupled centimeter-resolution UAV imagery with a Grid R-CNN detector to automate debris mapping on five beaches of Wonsan Island, Korea. Thirty-one Phantom 4 flights (0.83 cm GSD) produced 31,841 orthoimages, while 11 debris classes from the AI Hub dataset were used to train the model. The network reached 74.9% mAP, with 78% precision and 84.7% recall, while processing 2.87 images s⁻¹ on a single RTX 3060 Ti, enabling a 6 km shoreline to be surveyed in under one hour. Georeferenced detections aggregated to 25 m grids showed that 57% of high-density cells lay within 100 m of the beach entrances or landward edges, and 86% within 200 m. These micro-patterns, which are difficult to detect in meter-scale imagery, suggest that entrance-focused cleanup strategies could reduce annual maintenance costs by approximately one-fifth. This highlights the potential of centimeter-scale GeoAI in supporting sustainable beach management. Full article
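
The 25 m grid aggregation step lends itself to a short illustration. The following is a minimal sketch (not the authors' code) of binning georeferenced detections into 25 m cells with NumPy; the `grid_density` helper and the sample coordinates are hypothetical.

```python
import numpy as np

def grid_density(x, y, cell=25.0):
    """Count detections per cell of a regular grid with the given cell size (m)."""
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    counts = np.zeros((ix.max() + 1, iy.max() + 1), dtype=int)
    np.add.at(counts, (ix, iy), 1)  # accumulate one count per detection
    return counts

# Hypothetical projected coordinates (metres) of detected debris items.
rng = np.random.default_rng(0)
x = rng.uniform(0, 500, 1000)
y = rng.uniform(0, 100, 1000)
density = grid_density(x, y)
print("max detections in a single 25 m cell:", density.max())
```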

31 pages, 5498 KiB  
Technical Note
A Study on Vector-Based Processing and Texture Application Techniques for 3D Object Creation and Visualization
by Donghwi Kang, Jeongyeon Kim, Jongchan Lee, Haeju Lee, Jihyeok Kim and Jungwon Byun
Appl. Sci. 2025, 15(7), 4011; https://doi.org/10.3390/app15074011 - 5 Apr 2025
Viewed by 703
Abstract
This study proposes a technique for generating 3D objects from Shapefile-based 2D spatial data and converting them to comply with the CityGML 3.0 standard. In particular, the proposed Wise Interpolated Texture (hereafter referred to as WIT) technique optimizes texture mapping and enhances visual quality. High-resolution Z-values were extracted using DEM data, and computational efficiency was improved by applying the constrained Delaunay triangulation algorithm. This study implemented more realistic visual representations using high-resolution orthorectified imagery (hereafter referred to as orthoimages) in TIF format and improved data retrieval speed compared to existing raster methods through vector-based processing techniques. In this research, data weight reduction, parallel processing, and polygon simplification algorithms were applied to optimize the 3D model generation speed. Additionally, the WIT technique minimized discontinuity between textures and improved UV mapping alignment to achieve more natural and uniform textures. Experimental results confirmed that the proposed technique improved texture mapping speed, enhanced rendering quality, and increased large-scale data processing efficiency compared to conventional methods. Nevertheless, limitations still exist in real-time data integration and optimization of large-scale 3D models. Future research should consider dynamic modeling reflecting real-time image data, BIM data integration, and large-scale texture streaming techniques. Full article

16 pages, 8161 KiB  
Article
Influences of Tree Mortality on Fire Intensity and Burn Severity for a Southern California Forest Using Airborne and Satellite Imagery
by Nowshin Nawar, Douglas A. Stow, Philip Riggan, Robert Tissell, Daniel Sousa, Megan K. Jennings and Lynn Wolden
Fire 2025, 8(4), 144; https://doi.org/10.3390/fire8040144 - 2 Apr 2025
Viewed by 597
Abstract
In this study, we investigated the influence of pre-fire tree mortality on fire behavior. Although other studies have focused on the environmental factors affecting wildfire, the influence of pre-fire tree mortality has not been explored in detail. We used high-spatial-resolution (1.6 m) airborne multispectral orthoimages to detect and map pre-fire dead trees in a portion of the San Bernardino Mountains, where the ‘Old Fire’ burned in 2003, and assessed whether spatial patterns of fire intensity and burn severity coincide with patterns of tree mortality. Dead trees were mapped through a hybrid deep learning classification and manual editing approach, supported by Google Earth Pro historical images. Apparent thermal infrared (TIR) brightness temperature captured during the Old Fire was derived from maximum digital number values from FireMapper airborne thermal infrared imagery (7 m) as a measure of fire intensity. Burn severity was analyzed using normalized burn ratio maps derived from pre- and post-fire Landsat 5 satellite imagery (30 m). Pre-fire dead trees were prevalent, with 192 dead trees and 108 live trees per ha, with most dead trees clustered near the northwestern part of the study area east of Lake Arrowhead. The degree of spatial correspondence among dead tree density, fire intensity, and burn severity was analyzed using graphical and statistical analyses. The results revealed a significant but weak spatial association of dead trees with fire intensity (R² = 0.31) and burn severity (R² = 0.14). The findings indicate that areas impacted by pre-fire tree mortality were subject to higher fire intensity, followed by severe burn effects, though other biophysical factors also influenced these fire behavior variables. These results contradict a previous study that found no effect of tree mortality on the behavior of the Old Fire. Full article
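
For readers unfamiliar with the burn severity metric mentioned above, the normalized burn ratio and its pre-/post-fire difference can be written in a few lines. This is a generic sketch of the standard NBR/dNBR formulas, not the authors' processing chain; the single-pixel reflectance values are placeholders.

```python
import numpy as np

def nbr(nir, swir2):
    # Normalized burn ratio from near-infrared and shortwave-infrared reflectance.
    return (nir - swir2) / (nir + swir2 + 1e-9)

# Placeholder reflectances for a pre-fire and a post-fire scene.
nir_pre, swir2_pre = np.array([0.35]), np.array([0.12])
nir_post, swir2_post = np.array([0.18]), np.array([0.25])

dnbr = nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
print(dnbr)  # higher dNBR -> more severely burned pixel
```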
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

21 pages, 11982 KiB  
Article
Aerial-Drone-Based Tool for Assessing Flood Risk Areas Due to Woody Debris Along River Basins
by Innes Barbero-García, Diego Guerrero-Sevilla, David Sánchez-Jiménez, Ángel Marqués-Mateu and Diego González-Aguilera
Drones 2025, 9(3), 191; https://doi.org/10.3390/drones9030191 - 6 Mar 2025
Cited by 2 | Viewed by 1545
Abstract
River morphology is highly dynamic, requiring accurate datasets and models for effective management, especially in flood-prone regions. Climate change and urbanisation have intensified flooding events, increasing risks to populations and infrastructure. Woody debris, a natural element of river ecosystems, poses a dual challenge: while it provides critical habitats, it can obstruct water flow, exacerbate flooding, and threaten infrastructure. Traditional debris detection methods are time-intensive, hazardous, and limited in scope. This study introduces a novel tool integrating artificial intelligence (AI) and computer vision (CV) to detect woody debris in rivers using aerial drone imagery that is fully integrated into a geospatial Web platform (WebGIS). The tool identifies and segments debris, assigning risk levels based on obstruction severity. When using orthoimages as input data, the tool provides georeferenced locations and detailed reports to support flood mitigation and river management. The methodology encompasses drone data acquisition, photogrammetric processing, debris detection, and risk assessment, and it is validated using real-world data. The results show the tool’s capacity to detect large woody debris in a fully automatic manner. This approach automates woody debris detection and risk analysis, making it easier to manage rivers and providing valuable data for assessing flood risk. Full article

21 pages, 20898 KiB  
Article
Combining UAV and Sentinel Satellite Data to Delineate Ecotones at Multiscale
by Yuxin Ma, Zhangjian Xie, Xiaolin She, Hans J. De Boeck, Weihong Liu, Chaoying Yang, Ninglv Li, Bin Wang, Wenjun Liu and Zhiming Zhang
Forests 2025, 16(3), 422; https://doi.org/10.3390/f16030422 - 26 Feb 2025
Viewed by 729
Abstract
Ecotones, i.e., transition zones between habitats, are important landscape features, yet they are often ignored in landscape monitoring. This study addresses the challenge of delineating ecotones at multiple scales by integrating multisource remote sensing data, including ultra-high-resolution RGB images, LiDAR data from UAVs, and satellite data. We first developed a fine-resolution landcover map of three plots in Yunnan, China, with accurate delineation of ecotones using orthoimages and canopy height data derived from UAV-LiDAR. These maps were subsequently used as the training set for four machine learning models, from which the most effective model was selected as an upscaling model. The satellite data, encompassing Synthetic Aperture Radar (SAR; Sentinel-1), multispectral imagery (Sentinel-2), and topographic data, functioned as explanatory variables. The Random Forest model performed the best among the four models (kappa coefficient = 0.78), with the red band, shortwave infrared band, and vegetation red edge band as the most significant spectral variables. Using this RF model, we compared landscape patterns between 2017 and 2023 to test the model’s ability to quantify ecotone dynamics. We found that the ecotone area expanded by 0.287 km² (1.1%) over this period. In sum, this study demonstrates the effectiveness of combining UAV and satellite data for precise, large-scale ecotone detection. This can enhance our understanding of the dynamic relationship between ecological processes and landscape pattern evolution. Full article
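
As an illustration of the upscaling step described above, the sketch below trains a Random Forest on synthetic predictors standing in for the Sentinel-1/2 and topographic variables and reports a kappa coefficient. The feature list, labels, and data are hypothetical, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 6))                    # e.g. red, red-edge, SWIR, NDVI, VV, VH (illustrative)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)    # synthetic "ecotone vs. non-ecotone" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("kappa:", cohen_kappa_score(y_te, rf.predict(X_te)))
print("feature importances:", rf.feature_importances_)
```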
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

23 pages, 11219 KiB  
Article
New Paradigms for Geomorphological Mapping: A Multi-Source Approach for Landscape Characterization
by Martina Cignetti, Danilo Godone, Daniele Ferrari Trecate and Marco Baldo
Remote Sens. 2025, 17(4), 581; https://doi.org/10.3390/rs17040581 - 8 Feb 2025
Cited by 3 | Viewed by 1982
Abstract
The advent of geomatic techniques and novel sensors has opened the way to new approaches in mapping, including morphological ones. The evolution of a land portion and its graphical representation constitutes a fundamental aspect for scientific and land planning purposes. In this context, new paradigms, useful for modernizing traditional geomorphological mapping, become necessary for the creation of scalable digital representations of processes and landforms. A fully remote mapping approach, based on multi-source and multi-sensor applications, was implemented for the recognition of landforms and processes. This methodology was applied to a study site located in central Italy, characterized by the presence of ‘calanchi’ (i.e., badlands). Considering primarily the increasing availability of regional LiDAR products, an automated landform classification, i.e., Geomorphons, was adopted to map landforms at the slope scale. Simultaneously, by collecting and digitizing a time-series of historical orthoimages, a multi-temporal analysis was performed. Finally, surveying the area with an unmanned aerial vehicle, exploiting the high-resolution digital terrain model and orthoimage, a local-scale geomorphological map was produced. The proposed approach has proven well capable of identifying the variety of processes acting on the pilot area, distinguishing various genetic types of geomorphic processes within a nested hierarchy, where runoff-associated landforms coexist with gravitational ones. A large ancient mass movement characterizes the upper part of the basin, forming a deep-seated gravitational deformation that has been heavily remodeled by widespread runoff features: rills, gullies, and secondary shallow landslides. The extended badland areas developed on Plio-Pleistocene clays are typically affected by sheet wash and by rill and gully erosion, causing a high potential for sediment loss and the occurrence of earth- and mudflows that often affect agricultural areas and anthropic elements. This approach guarantees a multi-scale and multi-temporal cartographic model for a full-coverage representation of landforms, representing a useful tool for land planning purposes. Full article

18 pages, 6072 KiB  
Article
Application of UAV Photogrammetry and Multispectral Image Analysis for Identifying Land Use and Vegetation Cover Succession in Former Mining Areas
by Volker Reinprecht and Daniel Scott Kieffer
Remote Sens. 2025, 17(3), 405; https://doi.org/10.3390/rs17030405 - 24 Jan 2025
Cited by 4 | Viewed by 2272
Abstract
Variations in vegetation indices derived from multispectral images and digital terrain models from satellite imagery have been successfully used for reclamation and hazard management in former mining areas. However, low spatial resolution and the lack of sufficiently detailed information on surface morphology have restricted such studies to large sites. This study investigates the application of small, unmanned aerial vehicles (UAVs) equipped with multispectral sensors for land cover classification and vegetation monitoring. The application of UAVs bridges the gap between large-scale satellite remote sensing techniques and terrestrial surveys. Photogrammetric terrain models and orthoimages (RGB and multispectral) obtained from repeated mapping flights between November 2023 and May 2024 were combined with an ALS-based reference terrain model for object-based image classification. The collected data enabled differentiation between natural forests and areas affected by former mining activities, as well as the identification of variations in vegetation density and growth rates on former mining areas. The results confirm that small UAVs provide a versatile and efficient platform for classifying and monitoring mining areas and forested landslides. Full article

23 pages, 14898 KiB  
Article
Methods for the Construction and Editing of an Efficient Control Network for the Photogrammetric Processing of Massive Planetary Remote Sensing Images
by Xin Ma, Chun Liu, Xun Geng, Sifen Wang, Tao Li, Jin Wang, Pengying Liu, Jiujiang Zhang, Qiudong Wang, Yuying Wang, Yinhui Wang and Zhen Peng
Remote Sens. 2024, 16(23), 4600; https://doi.org/10.3390/rs16234600 - 7 Dec 2024
Viewed by 901
Abstract
Planetary photogrammetry remains an important technical means of producing high-precision planetary maps. High-quality control networks are fundamental to successful bundle adjustment. However, current software tools used by the planetary mapping community to construct and edit control networks exhibit very low efficiency. Moreover, redundant and invalid control points in the control network can further increase the time required for the bundle adjustment process. Due to a lack of targeted algorithm optimization, existing software tools and methods are unable to meet the photogrammetric processing requirements of massive planetary remote sensing images. To address these issues, we first proposed an efficient control network construction framework based on approximate orthoimage matching and hash quick search. Next, to effectively reduce the redundant control points in the control network and decrease the computation time required for bundle adjustment, we then proposed a control network-thinning algorithm based on a K-D tree fast search. Finally, we developed an automatic detection method based on ray tracing for identifying invalid control points in the control network. To validate the proposed methods, we conducted photogrammetric processing experiments using both the Lunar Reconnaissance Orbiter (LRO) narrow-angle camera (NAC) images and the Origins Spectral Interpretation Resource Identification Security Regolith Explorer (OSIRIS-REx) PolyCam images; we then compared the results with those derived from the famous open-source planetary photogrammetric software, the United States Geological Survey (USGS) Integrated Software for Imagers and Spectrometers (ISIS) version 8.0.0. The experimental results demonstrate that the proposed methods significantly improve the efficiency and quality of constructing control networks for large-scale planetary images. For thousands of planetary images, we were able to speed up the generation and editing of the control network by more than two orders of magnitude. Full article
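
The thinning idea described above, removing redundant control points that fall too close to an already retained point, can be sketched with a K-D tree radius query. This is an illustrative simplification under an assumed criterion (a pure minimum-spacing rule), not the paper's algorithm, and the `thin_points` helper and coordinates are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_points(points, min_dist):
    """Keep points greedily, discarding any point within min_dist of a kept one."""
    tree = cKDTree(points)
    keep, removed = [], np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        if removed[i]:
            continue
        keep.append(i)
        for j in tree.query_ball_point(p, min_dist):
            if j != i:
                removed[j] = True
    return np.array(keep)

pts = np.random.default_rng(2).random((5000, 2)) * 1000.0  # hypothetical control point coordinates (m)
kept = thin_points(pts, min_dist=10.0)
print(len(pts), "->", len(kept), "control points after thinning")
```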

18 pages, 12610 KiB  
Article
Automatic Registration of Panoramic Images and Point Clouds in Urban Large Scenes Based on Line Features
by Panke Zhang, Hao Ma, Liuzhao Wang, Ruofei Zhong, Mengbing Xu and Siyun Chen
Remote Sens. 2024, 16(23), 4450; https://doi.org/10.3390/rs16234450 - 27 Nov 2024
Viewed by 1197
Abstract
As the combination of panoramic images and laser point clouds becomes an increasingly widely used technique, the accurate determination of external parameters has become essential. However, due to the relative position change of the sensor and the time synchronization error, the automatic and accurate matching of the panoramic image and the point cloud is very challenging. To solve this problem, this paper proposes an automatic and accurate registration method for panoramic images and point clouds of large urban scenes based on line features. First, a multi-modal point cloud line feature extraction algorithm is used to extract the edges of the point cloud. Based on the point cloud intensity orthoimage (an orthogonal image based on the point cloud’s intensity values), the edges of the road markings are extracted, and the geometric feature edges are extracted by the 3D voxel method. Using the established virtual projection correspondence for the panoramic image, the panoramic image is projected onto the virtual plane for edge extraction. Second, the accurate matching relationship is constructed by using the feature constraint of the direction vector, and the edge features from both sensors are refined and aligned to realize the accurate calculation of the registration parameters. The experimental results show that the proposed method achieves excellent registration results in challenging urban scenes. The average registration error is better than 3 pixels, and the root mean square error (RMSE) is less than 1.4 pixels. Compared with mainstream methods, the proposed approach offers clear advantages and can support further research on and application of panoramic images and laser point clouds. Full article

8 pages, 3761 KiB  
Proceeding Paper
Preservation and Archiving of Historic Murals Using a Digital Non-Metric Camera
by Suhas Muralidhar and Ashutosh Bhardwaj
Eng. Proc. 2024, 82(1), 60; https://doi.org/10.3390/ecsa-11-20519 - 26 Nov 2024
Cited by 1 | Viewed by 487
Abstract
Digital non-metric cameras with high-resolution capabilities are being used in various domains such as digital heritage, artifact documentation, art conservation, and engineering applications. In this study, a novel approach combining close-range photogrammetry (CRP) and mapping techniques is used to capture the depth of a mural digitally, serving as a database for the preservation and archiving of historic murals. The open hall next to the main sanctuary of the Virupaksha temple in Hampi, Karnataka, India, which is a UNESCO World Heritage site, depicts cultural events on a mural-covered ceiling. A mirrorless Sony Alpha 7 III camera with a full-frame 24 MP CMOS sensor, fitted with 24 mm and 50 mm lenses, was used to acquire digital photographs with an image size of 6000 × 6000 pixels. The suggested framework incorporates five main steps: data acquisition, color correction, image mosaicking, orthorectification, and image filtering. The results show a high level of accuracy and precision attained during the image capture and processing steps. In a comparative study of the ceiling mural, the 24 mm lens orthoimage measured 9131 × 14,910 pixels with a pixel size of 1.05 mm, whereas the 50 mm lens produced a 14,283 × 21,676 pixel image with a pixel size of 0.596 mm. This degree of high spatial resolution is essential for maintaining the fine details of the artwork in the digital documentation as well as its historical context, subtleties, and painting techniques. The study’s findings demonstrate the effectiveness of using digital sensors with the close-range photogrammetry (CRP) technique as a useful method for recording and preserving historical ceiling murals. Full article
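
The pixel-size comparison above follows from the usual ground (object) sample distance relation, GSD = pixel pitch × object distance / focal length. The sketch below assumes a ~6 µm pixel pitch (36 mm / 6000 px for a 24 MP full-frame sensor) and a hypothetical 5 m camera-to-ceiling distance; it is an order-of-magnitude check, not the paper's computation.

```python
def gsd_mm(pixel_pitch_mm, distance_mm, focal_mm):
    # Object-space sample distance of one pixel for a simple pinhole model.
    return pixel_pitch_mm * distance_mm / focal_mm

pitch = 36.0 / 6000.0  # ~0.006 mm per pixel for a 24 MP full-frame sensor
for f in (24.0, 50.0):
    print(f"{f:.0f} mm lens at a 5 m distance: GSD = {gsd_mm(pitch, 5000.0, f):.3f} mm/px")
# A longer focal length yields a smaller GSD, consistent with the finer 50 mm result reported above.
```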

22 pages, 12882 KiB  
Article
Automated Cloud Shadow Detection from Satellite Orthoimages with Uncorrected Cloud Relief Displacements
by Hyeonggyu Kim, Wansang Yoon and Taejung Kim
Remote Sens. 2024, 16(21), 3950; https://doi.org/10.3390/rs16213950 - 23 Oct 2024
Viewed by 1642
Abstract
Clouds and their shadows significantly affect satellite imagery, resulting in a loss of radiometric information in the shadowed areas. This loss reduces the accuracy of land cover classification and object detection. Among various cloud shadow detection methods, the geometric-based method relies on the geometry of the sun and sensor to provide consistent results across diverse environments, ensuring better interpretability and reliability. It is well known that the direction of shadows in raw satellite images depends on the sun’s illumination and sensor viewing direction. Orthoimages are typically corrected for relief displacements caused by oblique sensor viewing, aligning the shadow direction with the sun. However, previous studies lacked an explicit experimental verification of this alignment, particularly for cloud shadows. We observed that this implication may not be realized for cloud shadows, primarily due to the unknown height of clouds. To verify this, we used RapidEye orthoimages acquired at various viewing azimuth and zenith angles and conducted experiments under two different cases: the first where the cloud shadow direction was estimated based only on the sun’s illumination, and the second where both the sun’s illumination and the sensor’s viewing direction were considered. Building on this, we propose an automated approach for cloud shadow detection. Our experiments demonstrated that the second case, which incorporates the sensor’s geometry, yields a cloud shadow direction closer to the true angle. Although the angles in nadir images were similar, the second case in high-oblique images showed a difference of less than 4.0° from the true angle, whereas the first case exhibited a much larger difference, up to 21.3°. The accuracy results revealed that shadow detection using the angle from the second case improved the average F1 score by 0.17 and increased the average detection rate by 7.7% compared to the first case. This result confirms that, even if the relief displacement of clouds is not corrected in the orthoimages, the proposed method allows for more accurate cloud shadow detection. Our main contributions are in providing quantitative evidence through experiments for the application of sensor geometry and establishing a solid foundation for handling complex scenarios. This approach has the potential to extend to the detection of shadows in high-resolution satellite imagery or UAV images, as well as objects like high-rise buildings. Future research will focus on this. Full article
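
The geometric idea, that the apparent cloud-to-shadow direction in an orthoimage with uncorrected cloud relief displacement depends on both the sun and the sensor geometry, can be sketched as two planar offsets proportional to cloud height. The sign conventions and the `apparent_shadow_azimuth` helper below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def apparent_shadow_azimuth(sun_az, sun_zen, view_az, view_zen):
    """Azimuth (deg, clockwise from north) from the displaced cloud to its shadow."""
    def offset(az, zen):
        # Unit-height horizontal offset pointing away from the sun/sensor, scaled by tan(zenith).
        a = np.radians(az + 180.0)
        return np.tan(np.radians(zen)) * np.array([np.sin(a), np.cos(a)])  # (east, north)
    v = offset(sun_az, sun_zen) - offset(view_az, view_zen)  # shadow offset minus cloud relief offset
    return np.degrees(np.arctan2(v[0], v[1])) % 360.0

# Nadir view: the direction is simply anti-solar; an oblique view shifts it.
print(apparent_shadow_azimuth(150.0, 30.0, 150.0, 0.0))   # ~330 deg (opposite the sun)
print(apparent_shadow_azimuth(150.0, 30.0, 250.0, 20.0))  # shifted by the viewing geometry
```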
(This article belongs to the Section Remote Sensing Image Processing)

25 pages, 47040 KiB  
Article
Mapping Earth Hummocks in Daisetsuzan National Park in Japan Using UAV-SfM Framework
by Yu Meng, Teiji Watanabe, Yuichi S. Hayakawa, Yuki Sawada and Ting Wang
Remote Sens. 2024, 16(19), 3610; https://doi.org/10.3390/rs16193610 - 27 Sep 2024
Viewed by 1586
Abstract
Earth hummocks are periglacial landforms that are widely distributed in arctic and alpine regions. This study employed an uncrewed aerial vehicle (UAV) and a structure from motion (SfM) framework to map and analyze the spatial distribution and morphological characteristics of earth hummocks across an extensive area in Daisetsuzan National Park, Japan. The UAV-captured images were processed using SfM photogrammetry to create orthomosaic images and high-resolution DEMs. We identified the distribution and morphological characteristics of earth hummocks using orthoimages, hillshade maps, and DEMs and analyzed how their morphological parameters relate to topographical conditions. A total of 18,838 individual earth hummocks in an area of approximately 82,599 m² were mapped and analyzed across the two study areas, surpassing the scale of existing studies. The average length, width, and height of these earth hummocks are 1.22 m, 1.03 m, and 0.15 m, respectively, and topographical features such as slope, aspect, and landforms are demonstrated to have an essential influence on the morphology of the earth hummocks. These findings enhance our understanding of how topographical features influence earth hummock morphology. Furthermore, this study demonstrates the efficacy of utilizing the UAV-SfM framework with multi-directional hillshade mapping as an alternative to manual field measurements in studying periglacial landforms in mountainous regions. Full article
(This article belongs to the Special Issue Remote Sensing for Mountain Ecosystems II)

22 pages, 4249 KiB  
Article
Estimating Methane Emissions in Rice Paddies at the Parcel Level Using Drone-Based Time Series Vegetation Indices
by Yongho Song, Cholho Song, Sol-E Choi, Joon Kim, Moonil Kim, Wonjae Hwang, Minwoo Roh, Sujong Lee and Woo-Kyun Lee
Drones 2024, 8(9), 459; https://doi.org/10.3390/drones8090459 - 4 Sep 2024
Viewed by 3166
Abstract
This study investigated a method for directly estimating methane emissions from rice paddy fields at the field level using drone-based time-series vegetation indices at a town scale. Drone optical and spectral images were captured approximately 15 times from April to November to acquire time-series vegetation indices and optical orthoimages. An empirical regression model validated in previous international studies was applied to calculate cumulative methane emissions throughout the rice cultivation process. Methane emissions were estimated using the vegetation index and yield data as input variables for each growth phase. Methane emissions from rice paddies showed maximum values of 309 kg CH₄ ha⁻¹, within a 7% range compared to similar studies, and minimum values of 138 kg CH₄ ha⁻¹, with differences ranging from 29% to 58%. The average emissions were calculated at 247 kg CH₄ ha⁻¹, revealing slightly lower average values but individual field values within a similar range. The results suggest that drone-based remote sensing technology is an efficient and cost-effective alternative to traditional field measurements for greenhouse gas emission assessments. However, adjustments and validations according to rice varieties and local cultivation environments are necessary. Overcoming these limitations can help establish sustainable agricultural management practices and achieve local greenhouse gas reduction targets. Full article
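
As a simple illustration of how per-hectare seasonal rates such as those above translate into per-parcel totals (the paper's regression model itself is not reproduced here), the arithmetic below uses hypothetical parcel areas together with the reported extreme and average rates.

```python
# Hypothetical parcel areas (ha); emission rates (kg CH4 per ha per season) are the values reported above.
parcels_ha = {"A": 0.8, "B": 1.4, "C": 0.5}
rate_kg_per_ha = {"A": 309.0, "B": 247.0, "C": 138.0}

for name, area in parcels_ha.items():
    total = rate_kg_per_ha[name] * area  # kg CH4 emitted by the parcel over the season
    print(f"parcel {name}: {total:.1f} kg CH4")
```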
(This article belongs to the Section Drones in Agriculture and Forestry)

30 pages, 8276 KiB  
Article
Land Use/Cover Classification of Large Conservation Areas Using a Ground-Linked High-Resolution Unmanned Aerial Vehicle
by Lazaro J. Mangewa, Patrick A. Ndakidemi, Richard D. Alward, Hamza K. Kija, Emmanuel R. Nasolwa and Linus K. Munishi
Resources 2024, 13(8), 113; https://doi.org/10.3390/resources13080113 - 22 Aug 2024
Cited by 2 | Viewed by 1949
Abstract
High-resolution remote sensing platforms are crucial for mapping land use/cover (LULC) types. Unmanned aerial vehicle (UAV) technology has been widely used in the northern hemisphere, addressing the challenges facing low- to medium-resolution satellite platforms. This study establishes the scalability of Sentinel-2 LULC classification with ground-linked UAV orthoimages to large African ecosystems, particularly the Burunge Wildlife Management Area in Tanzania. It involved UAV flights in 19 ground-surveyed plots followed by upscaling orthoimages to a 10 m × 10 m resolution to guide Sentinel-2 LULC classification. The results were compared with an unguided Sentinel-2 classification using the best-performing classifier (random forest, RFC), selected over support vector machines (SVMs) and maximum likelihood classification (MLC). The guided classification approach, with an overall accuracy (OA) of 94% and a kappa coefficient (k) of 0.92, outperformed the unguided classification approach (OA = 90%; k = 0.87). It registered grasslands (55.2%) as a major vegetated class, followed by woodlands (7.6%) and shrublands (4.7%). The unguided approach registered grasslands (43.3%), followed by shrublands (27.4%) and woodlands (1.7%). Ground-linked UAV-based training samples and the RFC improved the classification performance. The area size, heterogeneity, pre-UAV flight ground data, and UAV-based woody plant encroachment detection contribute to the study’s novelty. The findings are useful in conservation planning and rangelands management. Thus, the approach is recommended for similar conservation areas. Full article

37 pages, 6394 KiB  
Article
Insights into the Effects of Tile Size and Tile Overlap Levels on Semantic Segmentation Models Trained for Road Surface Area Extraction from Aerial Orthophotography
by Calimanut-Ionut Cira, Miguel-Ángel Manso-Callejo, Ramon Alcarria, Teresa Iturrioz and José-Juan Arranz-Justel
Remote Sens. 2024, 16(16), 2954; https://doi.org/10.3390/rs16162954 - 12 Aug 2024
Cited by 1 | Viewed by 2698
Abstract
Studies addressing the supervised extraction of geospatial elements from aerial imagery with semantic segmentation operations (including road surface areas) commonly feature tile sizes varying from 256 × 256 pixels to 1024 × 1024 pixels with no overlap. Relevant geo-computing works in the field often comment on prediction errors that could be attributed to the effect of tile size (number of pixels or the amount of information in the processed image) or to the overlap levels between adjacent image tiles (caused by the absence of continuity information near the borders). This study provides further insights into the impact of tile overlaps and tile sizes on the performance of deep learning (DL) models trained for road extraction. In this work, three semantic segmentation architectures were trained on data from the SROADEX dataset (orthoimages and their binary road masks) that contains approximately 700 million pixels of the positive “Road” class for the road surface area extraction task. First, a statistical analysis is conducted on the performance metrics achieved on unseen testing data featuring around 18 million pixels of the positive class. The goal of this analysis was to study the difference in mean performance and the main and interaction effects of the fixed factors on the dependent variables. The statistical tests showed that the impact on performance was significant for the main effects and for the two-way interactions between tile size and tile overlap and between tile size and DL architecture, at a level of significance of 0.05. We also report further insights and trends from an extensive qualitative analysis of the predictions of the best models at each tile size. The results indicate that training the DL models on larger tile sizes with a small percentage of overlap delivers better road representations and that testing different combinations of model and tile sizes can help achieve better extraction performance. Full article
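
To make the two experimental factors concrete, the sketch below cuts a raster into fixed-size tiles with a configurable overlap fraction; tile size and overlap are the variables whose main and interaction effects the study tests. The `tile_image` function and the placeholder array are illustrative, not the study's data pipeline.

```python
import numpy as np

def tile_image(img, tile=512, overlap=0.1):
    """Return a list of tile x tile crops, stepping by tile * (1 - overlap) pixels."""
    step = max(1, int(tile * (1.0 - overlap)))
    tiles = []
    for r in range(0, max(img.shape[0] - tile, 0) + 1, step):
        for c in range(0, max(img.shape[1] - tile, 0) + 1, step):
            tiles.append(img[r:r + tile, c:c + tile])
    return tiles

img = np.zeros((2048, 2048), dtype=np.uint8)          # placeholder orthoimage raster
print(len(tile_image(img, tile=512, overlap=0.0)))    # 16 tiles with no overlap
print(len(tile_image(img, tile=512, overlap=0.25)))   # more tiles once 25% overlap is added
```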
