Search Results (119)

Search Parameters:
Keywords = orthomosaic imagery

23 pages, 4382 KiB  
Article
MTL-PlotCounter: Multitask Driven Soybean Seedling Counting at the Plot Scale Based on UAV Imagery
by Xiaoqin Xue, Chenfei Li, Zonglin Liu, Yile Sun, Xuru Li and Haiyan Song
Remote Sens. 2025, 17(15), 2688; https://doi.org/10.3390/rs17152688 - 3 Aug 2025
Abstract
Accurate and timely estimation of soybean emergence at the plot scale using unmanned aerial vehicle (UAV) remote sensing imagery is essential for germplasm evaluation in breeding programs, where breeders prioritize overall plot-scale emergence rates over subimage-based counts. This study proposes PlotCounter, a deep learning regression model based on the TasselNetV2++ architecture, designed for plot-scale soybean seedling counting. It employs a patch-based training strategy combined with full-plot validation to achieve reliable performance with limited breeding plot data. To incorporate additional agronomic information, PlotCounter is extended into a multitask learning framework (MTL-PlotCounter) that integrates sowing metadata such as variety, number of seeds per hole, and sowing density as auxiliary classification tasks. RGB images of 54 breeding plots were captured in 2023 using a DJI Mavic 2 Pro UAV and processed into an orthomosaic for model development and evaluation. PlotCounter achieves a root mean square error (RMSE) of 6.98 and a relative RMSE (rRMSE) of 6.93%. The variety-integrated MTL-PlotCounter, V-MTL-PlotCounter, performs the best, with relative reductions of 8.74% in RMSE and 3.03% in rRMSE compared to PlotCounter, and outperforms representative YOLO-based models. Additionally, both PlotCounter and V-MTL-PlotCounter are deployed on a web-based platform powered by a multimodal large language model, enabling users to upload images via an interactive interface, automatically count seedlings, and analyze plot-scale emergence. This study highlights the potential of integrating UAV remote sensing, agronomic metadata, specialized deep learning models, and multimodal large language models for advanced crop monitoring. Full article
(This article belongs to the Special Issue Recent Advances in Multimodal Hyperspectral Remote Sensing)
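The plot-level error metrics reported above follow the standard definitions for count regression. A minimal Python sketch of those two formulas, using made-up counts rather than the paper's data:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between true and predicted counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rrmse(y_true, y_pred):
    """Relative RMSE: RMSE normalized by the mean true count, in percent."""
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(y_true))

# Hypothetical plot-scale seedling counts, for illustration only.
true_counts = [100, 110, 95, 105]
pred_counts = [102, 108, 97, 101]
print(rmse(true_counts, pred_counts), rrmse(true_counts, pred_counts))
```

Normalizing by the mean ground-truth count is what makes an rRMSE such as 6.93% comparable across plots of different sowing density.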

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in the case of large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices—Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI)—based on a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm was applied to divide the field into five vegetation vigor classes. Within each class, 25% of the pixels with the lowest average index values were preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (14% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the “moderate” and “low” vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management. Full article
(This article belongs to the Section Remote Sensors)
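The three RGB indices and the per-class 25% rule described above can be sketched from their standard formulas (ExG = 2G - R - B, GLI = (2G - R - B)/(2G + R + B), MGRVI = (G^2 - R^2)/(G^2 + R^2)). The helper names and the epsilon guard below are illustrative, not taken from the paper's plugin:

```python
import numpy as np

def rgb_indices(r, g, b):
    """Per-pixel ExG, GLI, and MGRVI from RGB reflectance or DN bands."""
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    eps = 1e-12  # guard against division by zero (illustrative choice)
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + eps)
    mgrvi = (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + eps)
    return exg, gli, mgrvi

def damage_candidates(index_map, labels, frac=0.25):
    """Within each vigor class, flag the `frac` of pixels with the lowest
    index values, mirroring the preliminary damage rule described above."""
    flagged = np.zeros(index_map.shape, dtype=bool)
    for k in np.unique(labels):
        mask = labels == k
        cutoff = np.quantile(index_map[mask], frac)
        flagged |= mask & (index_map <= cutoff)
    return flagged
```

In the workflow above, `labels` would come from k-means clustering of the index maps into five vigor classes; any clustering that yields integer labels plugs in unchanged.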

22 pages, 6010 KiB  
Article
Mapping Waterbird Habitats with UAV-Derived 2D Orthomosaic Along Belgium’s Lieve Canal
by Xingzhen Liu, Andrée De Cock, Long Ho, Kim Pham, Diego Panique-Casso, Marie Anne Eurie Forio, Wouter H. Maes and Peter L. M. Goethals
Remote Sens. 2025, 17(15), 2602; https://doi.org/10.3390/rs17152602 - 26 Jul 2025
Abstract
The accurate monitoring of waterbird abundance and their habitat preferences is essential for effective ecological management and conservation planning in aquatic ecosystems. This study explores the efficacy of unmanned aerial vehicle (UAV)-based high-resolution orthomosaics for waterbird monitoring and mapping along the Lieve Canal, Belgium. We systematically classified habitats into residential, industrial, riparian tree, and herbaceous vegetation zones, examining their influence on the spatial distribution of three focal waterbird species: Eurasian coot (Fulica atra), common moorhen (Gallinula chloropus), and wild duck (Anas platyrhynchos). Herbaceous vegetation zones consistently supported the highest waterbird densities, attributed to abundant nesting substrates and minimal human disturbance. UAV-based waterbird counts correlated strongly with ground-based surveys (R2 = 0.668), though species-specific detectability varied significantly due to morphological visibility and ecological behaviors. Detection accuracy was highest for coots, intermediate for ducks, and lowest for moorhens, highlighting the crucial role of image ground sampling distance (GSD) in aerial monitoring. Operational challenges, including image occlusion and habitat complexity, underline the need for tailored survey protocols and advanced sensing techniques. Our findings demonstrate that UAV imagery provides a reliable and scalable method for monitoring waterbird habitats, offering critical insights for biodiversity conservation and sustainable management practices in aquatic landscapes. Full article

14 pages, 6120 KiB  
Article
Drones and Deep Learning for Detecting Fish Carcasses During Fish Kills
by Edna G. Fernandez-Figueroa, Stephanie R. Rogers and Dinesh Neupane
Drones 2025, 9(7), 482; https://doi.org/10.3390/drones9070482 - 8 Jul 2025
Abstract
Fish kills are sudden mass mortalities that occur in freshwater and marine systems worldwide. Fish kill surveys are essential for assessing the ecological and economic impacts of fish kill events, but are often labor-intensive, time-consuming, and spatially limited. This study aims to address these challenges by exploring the application of unoccupied aerial systems (or drones) and deep learning techniques for coastal fish carcass detection. Seven flights were conducted using a DJI Phantom 4 RGB quadcopter to monitor three sites with different substrates (i.e., sand, rock, shored Sargassum). Orthomosaics generated from drone imagery were useful for detecting carcasses washed ashore, but not floating or submerged carcasses. Single shot multibox detection (SSD) with a ResNet50-based model demonstrated high detection accuracy, with a mean average precision (mAP) of 0.77 and a mean average recall (mAR) of 0.81. The model had slightly higher average precision (AP) when detecting large objects (>42.24 cm long, AP = 0.90) compared to small objects (≤14.08 cm long, AP = 0.77) because smaller objects are harder to recognize and require more contextual reasoning. The results suggest a strong potential future application of these tools for rapid fish kill response and automatic enumeration and characterization of fish carcasses. Full article
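The mAP/mAR figures above rest on matching detections to ground-truth boxes by intersection-over-union (IoU). A self-contained sketch of that matching criterion; the 0.5 threshold mentioned in the comment is the common COCO-style convention, not necessarily the paper's exact setting:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive when its IoU with a ground-truth
# carcass box exceeds a chosen threshold (0.5 is a typical starting point);
# precision and recall at each confidence level then yield AP and AR.
```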

28 pages, 11832 KiB  
Article
On the Minimum Dataset Requirements for Fine-Tuning an Object Detector for Arable Crop Plant Counting: A Case Study on Maize Seedlings
by Samuele Bumbaca and Enrico Borgogno-Mondino
Remote Sens. 2025, 17(13), 2190; https://doi.org/10.3390/rs17132190 - 25 Jun 2025
Abstract
Object detection is essential for precision agriculture applications like automated plant counting, but the minimum dataset requirements for effective model deployment remain poorly understood for arable crop seedling detection on orthomosaics. This study investigated how much annotated data is required to achieve standard counting accuracy (R2 = 0.85) for maize seedlings across different object detection approaches. We systematically evaluated traditional deep learning models requiring many training examples (YOLOv5, YOLOv8, YOLO11, RT-DETR), newer approaches requiring few examples (CD-ViTO), and methods requiring zero labeled examples (OWLv2) using drone-captured orthomosaic RGB imagery. We also implemented a handcrafted computer graphics algorithm as baseline. Models were tested with varying training sources (in-domain vs. out-of-distribution data), training dataset sizes (10–150 images), and annotation quality levels (10–100%). Our results demonstrate that no model trained on out-of-distribution data achieved acceptable performance, regardless of dataset size. In contrast, models trained on in-domain data reached the benchmark with as few as 60–130 annotated images, depending on architecture. Transformer-based models (RT-DETR) required significantly fewer samples (60) than CNN-based models (110–130), though they showed different tolerances to annotation quality reduction. Models maintained acceptable performance with only 65–90% of original annotation quality. Despite recent advances, neither few-shot nor zero-shot approaches met minimum performance requirements for precision agriculture deployment. These findings provide practical guidance for developing maize seedling detection systems, demonstrating that successful deployment requires in-domain training data, with minimum dataset requirements varying by model architecture. Full article
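The R2 = 0.85 acceptance benchmark above is the ordinary coefficient of determination applied to per-image counts. A small sketch (function names are illustrative):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between true and predicted plant counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def meets_benchmark(y_true, y_pred, threshold=0.85):
    """The deployment criterion used above: counting R^2 at or above 0.85."""
    return r_squared(y_true, y_pred) >= threshold
```

Plotting benchmark attainment against training-set size (10-150 images in the study) is then a matter of refitting each detector and re-running this check.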

23 pages, 4440 KiB  
Article
Large-Scale Topographic Mapping Using RTK-GNSS and Multispectral UAV Drone Photogrammetric Surveys: Comparative Evaluation of Experimental Results
by Siyandza M. Dlamini and Yashon O. Ouma
Geomatics 2025, 5(2), 25; https://doi.org/10.3390/geomatics5020025 - 18 Jun 2025
Abstract
The automation in image acquisition and processing using UAV drones has the potential to acquire terrain data that can be utilized for the accurate production of 2D and 3D digital data. In this study, the DJI Phantom 4 drone was employed for large-scale topographical mapping, and based on the photogrammetric Structure-from-Motion (SfM) algorithm, drone-derived point clouds were used to generate the terrain DSM, DEM, contours, and the orthomosaic from which the topographical map features were digitized. An evaluation of the horizontal (X, Y) and vertical (Z) coordinates of the UAV drone points and the RTK-GNSS survey data showed that the Z-coordinates had the highest MAE(X,Y,Z), RMSE(X,Y,Z) and Accuracy(X,Y,Z) errors. An integrated georeferencing of the UAV drone imagery using the mobile RTK-GNSS base station improved the 2D and 3D positional accuracies with an average 2D (X, Y) accuracy of <2 mm and height accuracy of −2.324 mm, with an overall 3D accuracy of −4.022 mm. Geometrically, the average difference in the perimeter and areas of the features from the RTK-GNSS and UAV drone topographical maps were −0.26% and −0.23%, respectively. The results achieved the recommended positional accuracy standards for the production of digital geospatial data, demonstrating the cost-effectiveness of low-cost UAV drones for large-scale topographical mapping. Full article

22 pages, 6059 KiB  
Article
Optimization of Flight Planning for Orthomosaic Generation Using Digital Twins and SITL Simulation
by Alex Oña, Luis Ortega, Andrey Carrillo and Esteban Valencia
Drones 2025, 9(6), 407; https://doi.org/10.3390/drones9060407 - 31 May 2025
Abstract
Farming plays a crucial role in the development of countries striving to achieve Sustainable Development Goals (SDGs). However, in developing nations, low productivity and poor food quality often result from a lack of modernization. In this context, precision agriculture (PA) introduces techniques to enhance agricultural management and improve production. Recent advancements in PA require higher-resolution imagery. Unmanned aerial vehicles (UAVs) have emerged as a cost-effective and highly capable tool for crop monitoring, offering high-resolution data (3–5 cm). However, operating UAVs in sensitive environments or during testing phases involves risks, and errors can lead to significant costs. To address these challenges, software-in-the-loop (SITL) simulation, combined with digital twins (DTs), allows for studying UAV behavior and anticipating potential risks. Furthermore, effective flight planning is essential to optimize time and resources, requiring certain mission parameters to be properly configured to ensure efficient generation of quality orthomosaics. Unlike previous studies, this article presents a novel methodology that integrates the SITL framework with the Gazebo simulator, a digital model of a multirotor UAV, and a digital terrain model of interest, which together allows for the creation of a digital twin. This approach serves as a low-cost tool to analyze flight parameters in various scenarios and optimize mission planning before field execution. Specifically, multiple flight missions were scheduled based on high-resolution requirements, different overlap configurations (40–70% and 30–60%), and variable wind conditions. The results demonstrate that the proposed parameters optimize mission planning in terms of efficiency and quality. 
Through both quantitative and qualitative evaluations, it was evident that, for low-altitude flights, the configurations with the lowest overlap produce high-resolution orthomosaics while significantly reducing operational time. Full article
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)
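Resolution and overlap requirements like those above translate into flight parameters through the standard ground-sampling-distance relation. A sketch with a hypothetical camera; the sensor and lens numbers are illustrative, not the study's configuration:

```python
def gsd_cm(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sampling distance in cm/pixel (standard photogrammetric relation)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

def spacing_m(footprint_m, overlap):
    """Distance between successive exposures or flight lines for a given
    fractional overlap: the un-overlapped share of the footprint."""
    return footprint_m * (1.0 - overlap)

# Hypothetical 1-inch-class RGB sensor; all numbers are illustrative only.
g = gsd_cm(sensor_width_mm=13.2, focal_length_mm=8.8, altitude_m=60.0,
           image_width_px=5472)
footprint = g * 5472 / 100.0                      # across-track footprint, metres
line_spacing = spacing_m(footprint, overlap=0.6)  # 60% side overlap
```

Lower overlap widens the spacing, which is exactly the time saving the study quantifies; the simulation then checks that the resulting image block still reconstructs into a usable orthomosaic.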

23 pages, 9783 KiB  
Article
Assessing Heterogeneity of Surface Water Temperature Following Stream Restoration and a High-Intensity Fire from Thermal Imagery
by Matthew I. Barker, Jonathan D. Burnett, Ivan Arismendi and Michael G. Wing
Remote Sens. 2025, 17(7), 1254; https://doi.org/10.3390/rs17071254 - 1 Apr 2025
Abstract
Thermal heterogeneity of rivers is essential to support freshwater biodiversity. Salmon behaviorally thermoregulate by moving from patches of warm water to cold water. When implementing river restoration projects, it is essential to monitor changes in temperature and thermal heterogeneity through time to assess the impacts to a river’s thermal regime. Lightweight sensors that record both thermal infrared (TIR) and multispectral data carried via unoccupied aircraft systems (UASs) present an opportunity to monitor temperature variations at high spatial (<0.5 m) and temporal resolution, facilitating the detection of the small patches of varying temperatures salmon require. Here, we present methods to classify and filter visible wetted area, including a novel procedure to measure canopy cover, and extract and correct radiant surface water temperature to evaluate changes in the variability of stream temperature pre- and post-restoration followed by a high-intensity fire in a section of the river corridor of the South Fork McKenzie River, Oregon. We used a simple linear model to correct the TIR data by imaging a water bath where the temperature increased from 9.5 to 33.4 °C. The resulting model reduced the mean absolute error from 1.62 to 0.35 °C. We applied this correction to TIR-measured temperatures of wetted cells classified using NDWI imagery acquired in the field. We found warmer conditions (+2.6 °C) after restoration (p < 0.001) and median absolute deviation for pre-restoration (0.30) to be less than both that of post-restoration (0.85) and post-fire (0.79) orthomosaics. In addition, there was statistically significant evidence to support the hypothesis of shifts in temperature distributions pre- and post-restoration (KS test 2009 vs. 2019, p < 0.001, D = 0.99; KS test 2019 vs. 2021, p < 0.001, D = 0.10). 
Moreover, we used a Generalized Additive Model (GAM) that included spatial and environmental predictors (i.e., canopy cover calculated from multispectral NDVI and photogrammetrically derived digital elevation model) to model TIR temperature from a transect along the main river channel. This model explained 89% of the deviance, and the predictor variables showed statistical significance. Collectively, our study underscored the potential of a multispectral/TIR sensor to assess thermal heterogeneity in large and complex river systems. Full article
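The simple linear TIR correction described above amounts to a least-squares fit of measured radiant temperatures against a reference. A sketch with invented calibration pairs spanning a ramp like the 9.5-33.4 °C water bath (the values are illustrative, not the study's data):

```python
import numpy as np

# Invented calibration pairs: reference water-bath temperature (degC) vs.
# the sensor's raw radiant reading over a controlled warming ramp.
reference = np.array([9.5, 15.0, 21.0, 27.0, 33.4])
measured = np.array([11.2, 16.4, 22.1, 27.8, 34.0])

# Simple linear model measured -> reference, fitted by least squares.
slope, intercept = np.polyfit(measured, reference, deg=1)
corrected = slope * measured + intercept

mae_before = float(np.mean(np.abs(measured - reference)))
mae_after = float(np.mean(np.abs(corrected - reference)))
```

Applying the fitted `slope`/`intercept` to every wetted-cell temperature in the orthomosaic is then a single vectorized operation, mirroring the MAE reduction (1.62 to 0.35 °C) the study reports.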

24 pages, 30254 KiB  
Article
Assessing Spatiotemporal LST Variations in Urban Landscapes Using Diurnal UAV Thermography
by Nizar Polat and Abdulkadir Memduhoğlu
Appl. Sci. 2025, 15(7), 3448; https://doi.org/10.3390/app15073448 - 21 Mar 2025
Abstract
This study investigates the spatiotemporal dynamics of land surface temperature (LST) across five distinct land use/land cover (LULC) classes through high-resolution unmanned aerial vehicle (UAV) thermal remote sensing. Thermal orthomosaics were systematically captured at four diurnal periods (morning, afternoon, evening, and midnight) over an urban university campus environment. Using stratified random sampling in each class with spatial controls to minimize autocorrelation, we quantified thermal signatures across bare soil, buildings, grassland, paved roads, and water bodies. Statistical analyses incorporating outlier management via the Interquartile Range (IQR) method, spatial autocorrelation assessment using Moran’s I, correlation testing, and Geographically Weighted Regression (GWR) revealed substantial thermal variability across LULC classes, with temperature differentials of up to 17.7 °C between grassland (20.57 ± 5.13 °C) and water bodies (7.10 ± 1.25 °C) during afternoon periods. The Moran’s I analysis indicated notable spatial dependence in land surface temperature, justifying the use of GWR to model these spatial patterns. Impervious surfaces demonstrated pronounced heat retention capabilities, with paved roads maintaining elevated temperatures into evening (13.18 ± 3.49 °C) and midnight (2.25 ± 1.51 °C) periods despite ambient cooling. Water bodies exhibited exceptional thermal stability (SD range: 0.79–2.85 °C across all periods), while grasslands showed efficient nocturnal cooling (ΔT = 23.02 °C from afternoon to midnight). GWR models identified spatially heterogeneous relationships between LST patterns and LULC distribution, with water bodies exerting the strongest localized cooling influence (R2≈ 0.62–0.68 during morning/evening periods). The findings demonstrate that surface material properties significantly modulate diurnal heat flux dynamics, with human-made surfaces contributing to prolonged thermal loading. 
This research advances urban microclimate monitoring methodologies by integrating high-resolution UAV thermal imagery with robust statistical frameworks, providing empirically-grounded insights for climate-adaptive urban planning and heat mitigation strategies. Future work should incorporate multi-seasonal observations, in situ validation instrumentation, and integration with human thermal comfort indices. Full article
(This article belongs to the Special Issue Technical Advances in UAV Photogrammetry and Remote Sensing)
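The IQR outlier treatment named above is Tukey's fence rule. A minimal sketch with a hypothetical LST sample containing one obvious sensor glitch:

```python
import numpy as np

def iqr_filter(values, k=1.5):
    """Keep observations inside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    keep = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return values[keep]

# Hypothetical LST samples (degC) for one LULC class, for illustration only.
samples = np.array([20.1, 20.6, 21.0, 19.8, 20.4, 45.0])
clean = iqr_filter(samples)
```

Running this per LULC class before computing the class means and standard deviations keeps isolated hot or cold pixels from distorting the diurnal comparisons.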

23 pages, 26510 KiB  
Article
Improving the Individual Tree Parameters Estimation of a Complex Mixed Conifer—Broadleaf Forest Using a Combination of Structural, Textural, and Spectral Metrics Derived from Unmanned Aerial Vehicle RGB and Multispectral Imagery
by Jeyavanan Karthigesu, Toshiaki Owari, Satoshi Tsuyuki and Takuya Hiroshima
Geomatics 2025, 5(1), 12; https://doi.org/10.3390/geomatics5010012 - 10 Mar 2025
Abstract
Individual tree parameters are essential for forestry decision-making, supporting economic valuation, harvesting, and silvicultural operations. While extensive research exists on uniform and simply structured forests, studies addressing complex, dense, and mixed forests with highly overlapping, clustered, and multiple tree crowns remain limited. This study bridges this gap by combining structural, textural, and spectral metrics derived from unmanned aerial vehicle (UAV) Red–Green–Blue (RGB) and multispectral (MS) imagery to estimate individual tree parameters using a random forest regression model in a complex mixed conifer–broadleaf forest. Data from 255 individual trees (115 conifers, 67 Japanese oak, and 73 other broadleaf species (OBL)) were analyzed. High-resolution UAV orthomosaic enabled effective tree crown delineation and canopy height models. Combining structural, textural, and spectral metrics improved the accuracy of tree height, diameter at breast height, stem volume, basal area, and carbon stock estimates. Conifers showed high accuracy (R2 = 0.70–0.89) for all individual parameters, with a high estimate of tree height (R2 = 0.89, RMSE = 0.85 m). The accuracy of oak (R2 = 0.11–0.49) and OBL (R2 = 0.38–0.57) was improved, with OBL species achieving relatively high accuracy for basal area (R2 = 0.57, RMSE = 0.08 m2 tree−1) and volume (R2 = 0.51, RMSE = 0.27 m3 tree−1). These findings highlight the potential of UAV metrics in accurately estimating individual tree parameters in a complex mixed conifer–broadleaf forest. Full article

22 pages, 6757 KiB  
Article
Co-Registration of Multi-Modal UAS Pushbroom Imaging Spectroscopy and RGB Imagery Using Optical Flow
by Ryan S. Haynes, Arko Lucieer, Darren Turner and Emiliano Cimoli
Drones 2025, 9(2), 132; https://doi.org/10.3390/drones9020132 - 11 Feb 2025
Abstract
Remote sensing from unoccupied aerial systems (UASs) has witnessed exponential growth. The increasing use of imaging spectroscopy sensors and RGB cameras on UAS platforms demands accurate, cross-comparable multi-sensor data. Inherent errors during image capture or processing can introduce spatial offsets, diminishing spatial accuracy and hindering cross-comparison and change detection analysis. To address this, we demonstrate the use of an optical flow algorithm, eFOLKI, for co-registering imagery from two pushbroom imaging spectroscopy sensors (VNIR and NIR/SWIR) to an RGB orthomosaic. Our study focuses on two ecologically diverse vegetative sites in Tasmania, Australia. Both sites are structurally complex, posing challenging datasets for co-registration algorithms with initial georectification spatial errors of up to 9 m planimetrically. The optical flow co-registration significantly improved the spatial accuracy of the imaging spectroscopy relative to the RGB orthomosaic. After co-registration, spatial alignment errors were greatly improved, with RMSE and MAE values of less than 13 cm for the higher-spatial-resolution dataset and less than 33 cm for the lower resolution dataset, corresponding to only 2–4 pixels in both cases. These results demonstrate the efficacy of optical flow co-registration in reducing spatial discrepancies between multi-sensor UAS datasets, enhancing accuracy and alignment to enable robust environmental monitoring. Full article

20 pages, 6209 KiB  
Article
Monitoring and Prediction of Wild Blueberry Phenology Using a Multispectral Sensor
by Kenneth Anku, David Percival, Mathew Vankoughnett, Rajasekaran Lada and Brandon Heung
Remote Sens. 2025, 17(2), 334; https://doi.org/10.3390/rs17020334 - 19 Jan 2025
Abstract
(1) Background: Research and development in remote sensing have been used to determine and monitor crop phenology. This approach assesses the internal and external changes of the plant. Therefore, the objective of this study was to determine the potential of using a multispectral sensor to predict phenology in wild blueberry fields. (2) Method: A UAV equipped with a five-banded multispectral camera was used to collect aerial imagery. Sites consisted of two commercial fields, Lemmon Hill and Kemptown. An RCBD with six replications, four treatments, and a plot size of 6 × 8 m with a 2 m buffer between plots was used. Orthomosaic maps and vegetative indices were generated. (3) Results: There were significant correlations between VIs and growth parameters at different stages. The F4/F5 and F6/F7 stages showed significantly high correlation values among all growth stages. LAI, floral, and vegetative bud stages could be estimated at the tight cluster (F4/F5) and bloom (F6/F7) stages with R2/CCC = 0.90/0.84. Variable importance showed that NDVI, ENDVI, GLI, VARI, and GRVI contributed significantly to achieving these predicted values, with NDRE showing low effects. (4) Conclusion: This implies that the F4/F5 and F6/F7 stages are good stages for making phenological predictions and estimations about wild blueberry plants. Full article
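The vegetation indices ranked above (NDVI, NDRE, and relatives) are simple band ratios computed per orthomosaic pixel. A sketch of two of them using their standard formulas (the epsilon guard is an illustrative choice):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, dtype=float), np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index, the weak predictor noted above."""
    nir, red_edge = np.asarray(nir, dtype=float), np.asarray(red_edge, dtype=float)
    return (nir - red_edge) / (nir + red_edge + 1e-12)
```

With a five-band camera, each index is one array expression over the band rasters; plot-level means over the 6 × 8 m plots then feed the stage-wise correlation analysis.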

16 pages, 21810 KiB  
Article
Enhancing Direct Georeferencing Using Real-Time Kinematic UAVs and Structure from Motion-Based Photogrammetry for Large-Scale Infrastructure
by Soohee Han and Dongyeob Han
Drones 2024, 8(12), 736; https://doi.org/10.3390/drones8120736 - 5 Dec 2024
Abstract
The growing demand for high-accuracy mapping and 3D modeling using unmanned aerial vehicles (UAVs) has accelerated advancements in flight dynamics, positioning accuracy, and imaging technology. Structure from motion (SfM), a computer vision-based approach, is increasingly replacing traditional photogrammetry through facilitating the automation of processes such as aerial triangulation (AT), terrain modeling, and orthomosaic generation. This study examines methods to enhance the accuracy of SfM-based AT through real-time kinematic (RTK) UAV imagery, focusing on large-scale infrastructure applications, including a dam and its entire basin. The target area, primarily consisting of homogeneous water surfaces, poses considerable challenges for feature point extraction and image matching, which are crucial for effective SfM. To overcome these challenges and improve the AT accuracy, a constraint equation was applied, incorporating weighted 3D coordinates derived from RTK UAV data. Furthermore, oblique images were combined with nadir images to stabilize AT, and confidence-based filtering was applied to point clouds to enhance geometric quality. The results indicate that assigning appropriate weights to 3D coordinates and incorporating oblique imagery significantly improve the AT accuracy. This approach presents promising advancements for RTK UAV-based AT in SfM-challenging, large-scale environments, thus supporting more efficient and precise mapping applications. Full article
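Assigning weights to RTK-derived 3D coordinates, as in the constraint equation above, is in spirit an inverse-variance weighting of observations. A simplified standalone sketch; the sigmas and coordinates are invented, and the paper's actual constraint enters the bundle adjustment rather than a standalone average:

```python
import numpy as np

def weighted_position(observations, sigmas):
    """Inverse-variance weighted estimate of one 3D coordinate from several
    observations; tighter sigmas pull the solution toward those fixes."""
    obs = np.asarray(observations, dtype=float)      # shape (n, 3)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weights, shape (n,)
    return (w[:, None] * obs).sum(axis=0) / w.sum()

# Invented example: a precise RTK camera fix (2 cm sigma) and a coarse SfM
# tie-point estimate (10 cm sigma) observing the same ground point.
est = weighted_position([[100.00, 200.00, 50.00],
                         [100.10, 200.10, 50.20]],
                        sigmas=[0.02, 0.10])
```

The same principle explains the study's finding: giving the RTK coordinates appropriately tight weights anchors aerial triangulation even where feature matching over water is unreliable.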
26 pages, 23951 KiB  
Article
Development of Methods for Satellite Shoreline Detection and Monitoring of Megacusp Undulations
by Riccardo Angelini, Eduard Angelats, Guido Luzi, Andrea Masiero, Gonzalo Simarro and Francesca Ribas
Remote Sens. 2024, 16(23), 4553; https://doi.org/10.3390/rs16234553 - 4 Dec 2024
Cited by 2 | Viewed by 2129
Abstract
Coastal zones, particularly sandy beaches, are highly dynamic environments subject to a variety of natural and anthropogenic forcings. The instantaneous shoreline is a widely used indicator of beach change in image-based applications, and it can display undulations at different spatial and temporal scales. Megacusps, periodic seaward and landward shoreline perturbations, are an example of such undulations that can significantly modify beach width and impact its usability. Traditionally, the study of these phenomena relied on video monitoring systems, which provide high-frequency imagery but limited spatial coverage. Instead, this study explored the potential of multispectral satellite-derived shorelines, specifically from the Sentinel-2 (S2) and PlanetScope (PLN) platforms, for characterizing and monitoring megacusp formation and dynamics over time. First, a tool was developed and validated to guarantee accurate shoreline detection, based on a combination of spectral indices along with both thresholding and unsupervised clustering techniques. This shoreline detection phase was validated on three micro-tidal Mediterranean beaches by comparison with high-resolution orthomosaics and in situ GNSS data, achieving good subpixel accuracy (with a mean absolute deviation of 1.5–5.5 m depending on the satellite type). Second, a tool for megacusp characterization was implemented, and subsequent validation with reference data proved that satellite-derived shorelines can robustly and accurately describe megacusps. By combining S2 and PLN imagery, the methodology could not only capture megacusp amplitude and wavelength (of the order of 10 and 100 m, respectively) but also monitor their weekly to daily evolution using different potential metrics. Our findings demonstrate that multispectral satellite imagery provides a viable and scalable solution for monitoring shoreline megacusp undulations, enhancing our understanding of these features and offering a useful option for coastal management. Full article
(This article belongs to the Section Environmental Remote Sensing)

15 pages, 6065 KiB  
Article
Assessment of UAV-Based Deep Learning for Corn Crop Analysis in Midwest Brazil
by José Augusto Correa Martins, Alberto Yoshiriki Hisano Higuti, Aiesca Oliveira Pellegrin, Raquel Soares Juliano, Adriana Mello de Araújo, Luiz Alberto Pellegrin, Veraldo Liesenberg, Ana Paula Marques Ramos, Wesley Nunes Gonçalves, Diego André Sant’Ana, Hemerson Pistori and José Marcato Junior
Agriculture 2024, 14(11), 2029; https://doi.org/10.3390/agriculture14112029 - 11 Nov 2024
Cited by 1 | Viewed by 1238
Abstract
Crop segmentation, the process of identifying and delineating agricultural fields or specific crops within an image, plays a crucial role in precision agriculture, enabling farmers and public managers in Midwest Brazil to make informed decisions regarding crop health, yield estimation, and resource allocation. Corn crops in this region are being damaged by wild pigs and by diseases. To quantify corn fields, this paper applies novel computer-vision techniques to a new dataset of corn imagery composed of 1416 images of 256 × 256 pixels and their corresponding labels. We flew nine drone missions and classified wild pig damage in ten orthomosaics at different stages of growth using semi-automatic digitizing and deep-learning techniques. The crop-development period analyzed ranged from early sprouting to the start of the drying phase. The objective of segmentation is to transform or simplify the representation of an image, making it more meaningful and easier to interpret. With the DeepLabV3+ architecture, the corn class achieved an IoU of 77.92% and the background class 83.25%; with the SegFormer architecture, the corresponding values were 78.81% for corn and 83.73% for background. Accuracy reached 86.88% for corn and 91.41% for background using DeepLabV3+, and 88.14% and 91.15%, respectively, using SegFormer. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
