Search Results (5,061)

Search Parameters: Keywords = aerial images

18 pages, 1144 KB  
Article
Hypersector-Based Method for Real-Time Classification of Wind Turbine Blade Defects
by Lesia Dubchak, Bohdan Rusyn, Carsten Wolff, Tomasz Ciszewski, Anatoliy Sachenko and Yevgeniy Bodyanskiy
Energies 2026, 19(2), 442; https://doi.org/10.3390/en19020442 - 16 Jan 2026
Abstract
This paper presents a novel hypersector-based method with Fuzzy Learning Vector Quantization (FLVQ) for the real-time classification of wind turbine blade defects using data acquired by unmanned aerial vehicles (UAVs). Unlike conventional prototype-based FLVQ approaches that rely on Euclidean distance in the feature space, the proposed method models each defect class as a hypersector on an n-dimensional hypersphere, where class boundaries are defined by angular similarity and fuzzy membership transitions. This geometric reinterpretation of FLVQ constitutes the core innovation of the study, enabling improved class separability, robustness to noise, and enhanced interpretability under uncertain operating conditions. Feature vectors extracted via the pre-trained SqueezeNet convolutional network are normalized onto the hypersphere, forming compact directional clusters that serve as the geometric foundation of the FLVQ classifier. A fuzzy softmax membership function and an adaptive prototype-updating mechanism are introduced to handle class overlap and improve learning stability. Experimental validation on a custom dataset of 900 UAV-acquired images achieved 95% classification accuracy on test data and 98.3% on an independent dataset, with an average F1-score of 0.91. Comparative analysis with the classical FLVQ prototype demonstrated superior performance and noise robustness. Owing to its low computational complexity and transparent geometric decision structure, the developed model is well-suited for real-time deployment on UAV embedded systems. Furthermore, the proposed hypersector FLVQ framework is generic and can be extended to other renewable-energy diagnostic tasks, including solar and hydropower asset monitoring, contributing to enhanced energy security and sustainability. Full article
(This article belongs to the Special Issue Modeling, Control and Optimization of Wind Power Systems)
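The angular-similarity and fuzzy-membership ideas described in this abstract can be sketched in a few lines. The snippet below is only an illustration under assumed values (random prototypes, a toy 4-D feature space, a made-up sharpness parameter beta); it is not the authors' hypersector FLVQ implementation. Features are L2-normalized onto the unit hypersphere, and class membership is a softmax over negative angular distances to class prototypes.

```python
import numpy as np

def to_hypersphere(x):
    """L2-normalize feature vectors so they lie on the unit hypersphere."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuzzy_membership(x, prototypes, beta=5.0):
    """Fuzzy softmax membership from angular distance to each class prototype.

    x: (n_features,) normalized feature vector
    prototypes: (n_classes, n_features) normalized class prototypes
    beta: assumed sharpness of the membership transition
    """
    cos_sim = prototypes @ x                          # cosine similarity in [-1, 1]
    angles = np.arccos(np.clip(cos_sim, -1.0, 1.0))   # angular distance on the hypersphere
    logits = -beta * angles
    e = np.exp(logits - logits.max())
    return e / e.sum()

# toy example: 3 defect classes in a 4-D feature space
rng = np.random.default_rng(0)
prototypes = to_hypersphere(rng.normal(size=(3, 4)))
feature = to_hypersphere(rng.normal(size=4))
memberships = fuzzy_membership(feature, prototypes)
print(memberships, "-> predicted class", memberships.argmax())
```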

32 pages, 5410 KB  
Review
Ambrosia artemisiifolia in Hungary: A Review of Challenges, Impacts, and Precision Agriculture Approaches for Sustainable Site-Specific Weed Management Using UAV Technologies
by Sherwan Yassin Hammad, Gergő Péter Kovács and Gábor Milics
AgriEngineering 2026, 8(1), 30; https://doi.org/10.3390/agriengineering8010030 - 15 Jan 2026
Viewed by 54
Abstract
Weed management has become a critical agricultural practice, as weeds compete with crops for nutrients, host pests and diseases, and cause major economic losses. The invasive weed Ambrosia artemisiifolia (common ragweed) is particularly problematic in Hungary, endangering crop productivity and public health through its fast proliferation and allergenic pollen. This review examines the current challenges and impacts of A. artemisiifolia while exploring sustainable approaches to its management through precision agriculture. Recent advancements in unmanned aerial vehicles (UAVs) equipped with advanced imaging systems, remote sensing, and artificial intelligence, particularly deep learning models such as convolutional neural networks (CNNs) and Support Vector Machines (SVMs), enable accurate detection, mapping, and classification of weed infestations. These technologies facilitate site-specific weed management (SSWM) by optimizing herbicide application, reducing chemical inputs, and minimizing environmental impacts. The results of recent studies demonstrate the high potential of UAV-based monitoring for real-time, data-driven weed management. The review concludes that integrating UAV and AI technologies into weed management offers a sustainable, cost-effective, and environmentally responsible solution that mitigates socioeconomic impacts, emphasizing the need for collaboration between agricultural researchers and technology developers to enhance precision agriculture practices in Hungary. Full article

20 pages, 2787 KB  
Article
FWISD: Flood and Waterfront Infrastructure Segmentation Dataset with Model Evaluations
by Kaiwen Xue and Cheng-Jie Jin
Remote Sens. 2026, 18(2), 281; https://doi.org/10.3390/rs18020281 - 15 Jan 2026
Viewed by 114
Abstract
The increasing severity of extreme weather events necessitates rapid methods for post-disaster damage assessment. Current remote sensing datasets often lack the spatial resolution required for a detailed evaluation of critical waterfront infrastructure, which is vulnerable during hurricanes. To address this limitation, we introduce the Flood and Waterfront Infrastructure Segmentation Dataset (FWISD), a new dataset constructed from high-resolution unmanned aerial vehicle imagery captured after a major hurricane, comprising 3750 annotated 1024 × 1024 pixel image patches. The dataset provides semantic labels for 11 classes, specifically designed to distinguish between intact and damaged structures. We conducted comprehensive experiments to evaluate the performance of both convolution-based and Transformer-based models. Our results indicate that hybrid models integrating Transformer encoders with convolutional decoders achieve a superior balance of contextual understanding and spatial precision. Regression analysis indicates that the distance to water has the greatest influence on the detection success rate, while comparative experiments emphasize the unique complexity of waterfront infrastructure compared to homogeneous datasets. In summary, FWISD provides a valuable resource for developing and evaluating advanced models, establishing a foundation for automated systems that can improve the timeliness and precision of post-disaster response. Full article
(This article belongs to the Section AI Remote Sensing)

25 pages, 6075 KB  
Article
High-Frequency Monitoring of Explosion Parameters and Vent Morphology During Stromboli’s May 2021 Crater-Collapse Activity Using UAS and Thermal Imagery
by Elisabetta Del Bello, Gaia Zanella, Riccardo Civico, Tullio Ricci, Jacopo Taddeucci, Daniele Andronico, Antonio Cristaldi and Piergiorgio Scarlato
Remote Sens. 2026, 18(2), 264; https://doi.org/10.3390/rs18020264 - 14 Jan 2026
Viewed by 196
Abstract
Stromboli’s volcanic activity fluctuates in intensity and style, and periods of heightened activity can trigger hazardous events such as crater collapses and lava overflows. This study investigates the volcano’s explosive behavior surrounding the 19 May 2021 crater-rim failure, which primarily affected the N2 crater and partially involved N1, by integrating high-frequency thermal imaging and high-resolution unmanned aerial system (UAS) surveys to quantify eruption parameters and vent morphology. Typically, eruptive periods preceding vent instability are characterized by evident changes in geophysical parameters and by intensified explosive activity. This is quantitatively monitored mainly through explosion frequency, while other eruption parameters are assessed qualitatively and sporadically. Our results show that, in addition to explosion rate, the spattering rate, the predominance of bomb- and gas-rich explosions, and the number of active vents increased prior to the collapse, reflecting near-surface magma pressurization. UAS surveys revealed that the pre-collapse configuration of the northern craters contributed to structural vulnerability, while post-collapse vent realignment reflected magma’s adaptation to evolving stress conditions. The May 2021 events were likely influenced by morphological changes induced by the 2019 paroxysms, which increased collapse frequency and amplified the 2021 failure. These findings highlight the importance of integrating quantitative time series of multiple eruption parameters and high-frequency morphological surveys into monitoring frameworks to improve early detection of system disequilibrium and enhance hazard assessment at Stromboli and similar volcanic systems. Full article

22 pages, 6609 KB  
Article
CAMS-AI: A Coarse-to-Fine Framework for Efficient Small Object Detection in High-Resolution Images
by Zhanqi Chen, Zhao Chen, Baohui Yang, Qian Guo, Haoran Wang and Xiangquan Zeng
Remote Sens. 2026, 18(2), 259; https://doi.org/10.3390/rs18020259 - 14 Jan 2026
Viewed by 103
Abstract
Automated livestock monitoring in wide-area grasslands is a critical component of smart agriculture development. Devices such as Unmanned Aerial Vehicles (UAVs), remote sensing, and high-mounted cameras provide unique monitoring perspectives for this purpose. The high-resolution images they capture cover vast grassland backgrounds, where targets often appear as small, distant objects and are extremely unevenly distributed. Applying standard detectors directly to such images yields poor results and extremely high miss rates. To improve the detection accuracy of small targets in high-resolution images, methods represented by Slicing Aided Hyper Inference (SAHI) have been widely adopted. However, in specific scenarios, SAHI’s drawbacks are dramatically amplified. Its strategy of uniform global slicing divides each original image into a fixed number of sub-images, many of which may be pure background (negative samples) containing no targets. This results in a significant waste of computational resources and a precipitous drop in inference speed, falling far short of practical application requirements. To resolve this conflict between accuracy and efficiency, this paper proposes an efficient detection framework named CAMS-AI (Clustering and Adaptive Multi-level Slicing for Aided Inference). CAMS-AI adopts a “coarse-to-fine” intelligent focusing strategy: First, a Region Proposal Network (RPN) is used to rapidly locate all potential target areas. Next, a clustering algorithm is employed to generate precise Regions of Interest (ROIs), effectively focusing computational resources on target-dense areas. Finally, an innovative multi-level slicing strategy and a high-precision model are applied only to these high-quality ROIs for fine-grained detection. Experimental results demonstrate that the CAMS-AI framework achieves a mean Average Precision (mAP) comparable to SAHI while significantly increasing inference speed. Taking the RT-DETR detector as an example, while achieving 96% of the mAP50–95 accuracy level of the SAHI method, CAMS-AI’s end-to-end frames per second (FPS) is 10.3 times that of SAHI, showcasing its immense application potential in real-world, high-resolution monitoring scenarios. Full article
(This article belongs to the Section Remote Sensing Image Processing)
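The "coarse-to-fine" focusing step can be illustrated with a minimal sketch. It uses DBSCAN to group coarse detection centers into padded regions of interest; the paper's own RPN, clustering algorithm, and multi-level slicing strategy are not reproduced here, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_rois(centers, eps=200.0, min_samples=2, pad=64, img_w=8000, img_h=6000):
    """Group coarse detection centers into padded regions of interest.

    centers: (n, 2) array of (x, y) centers from a fast first-pass detector
    eps, min_samples, pad: assumed clustering and padding parameters
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centers)
    rois = []
    for lbl in set(labels):
        if lbl == -1:                      # noise points are skipped in this sketch
            continue
        pts = centers[labels == lbl]
        x0, y0 = pts.min(axis=0) - pad
        x1, y1 = pts.max(axis=0) + pad
        rois.append((max(0, int(x0)), max(0, int(y0)),
                     min(img_w, int(x1)), min(img_h, int(y1))))
    return rois                            # each ROI would then go to a high-precision detector

coarse = np.array([[100, 120], [150, 140], [160, 100], [5000, 4200], [5050, 4260]])
print(cluster_rois(coarse))
```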

24 pages, 6383 KB  
Article
FF-Mamba-YOLO: An SSM-Based Benchmark for Forest Fire Detection in UAV Remote Sensing Images
by Binhua Guo, Dinghui Liu, Zhou Shen and Tiebin Wang
J. Imaging 2026, 12(1), 43; https://doi.org/10.3390/jimaging12010043 - 13 Jan 2026
Viewed by 171
Abstract
Timely and accurate detection of forest fires through unmanned aerial vehicle (UAV) remote sensing target detection technology is of paramount importance. However, multiscale targets and complex environmental interference in UAV remote sensing images pose significant challenges during detection tasks. To address these obstacles, this paper presents FF-Mamba-YOLO, a novel framework based on the principles of Mamba and YOLO (You Only Look Once) that leverages innovative modules and architectures to overcome these limitations. Specifically, we introduce MFEBlock and MFFBlock based on state space models (SSMs) in the backbone and neck parts of the network, respectively, enabling the model to effectively capture global dependencies. Second, we construct CFEBlock, a module that performs feature enhancement before SSM processing, improving local feature processing capabilities. Furthermore, we propose MGBlock, which adopts a dynamic gating mechanism, enhancing the model’s adaptive processing capabilities and robustness. Finally, we enhance the structure of Path Aggregation Feature Pyramid Network (PAFPN) to improve feature fusion quality and introduce DySample to enhance image resolution without significantly increasing computational costs. Experimental results on our self-constructed forest fire image dataset demonstrate that the model achieves 67.4% mAP@50, 36.3% mAP@50:95, and 64.8% precision, outperforming previous state-of-the-art methods. These results highlight the potential of FF-Mamba-YOLO in forest fire monitoring. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
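As a generic illustration of the dynamic gating idea mentioned above (this is not the paper's MGBlock; the block name and structure here are assumptions), a learned per-channel gate can blend two feature branches:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Toy dynamic gating block: a learned per-channel gate blends two branches."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze spatial dimensions
            nn.Conv2d(channels, channels, kernel_size=1),  # per-channel gating weights
            nn.Sigmoid(),                                  # gate values in (0, 1)
        )

    def forward(self, local_feat, global_feat):
        g = self.gate(local_feat + global_feat)
        return g * local_feat + (1.0 - g) * global_feat

x_local = torch.randn(1, 64, 32, 32)
x_global = torch.randn(1, 64, 32, 32)
print(GatedFusion(64)(x_local, x_global).shape)            # torch.Size([1, 64, 32, 32])
```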

22 pages, 3834 KB  
Article
Image-Based Spatio-Temporal Graph Learning for Diffusion Forecasting in Digital Management Systems
by Chenxi Du, Zhengjie Fu, Yifan Hu, Yibin Liu, Jingwen Cao, Siyuan Liu and Yan Zhan
Electronics 2026, 15(2), 356; https://doi.org/10.3390/electronics15020356 - 13 Jan 2026
Viewed by 180
Abstract
With the widespread application of high-resolution remote sensing imagery and unmanned aerial vehicle technologies in agricultural scenarios, accurately characterizing spatial pest diffusion from multi-temporal images has become a critical issue in intelligent agricultural management. To overcome the limitations of existing machine learning approaches that focus mainly on static recognition and lack effective spatio-temporal diffusion modeling, a UAV-based pest diffusion prediction and simulation framework is proposed. Multi-temporal UAV RGB and multispectral imagery are jointly modeled using a graph-based representation of farmland parcels, while temporal modeling and environmental embedding mechanisms are incorporated to enable simultaneous prediction of diffusion intensity and propagation paths. Experiments conducted on two real agricultural regions, Bayan Nur and Tangshan, demonstrate that the proposed method consistently outperforms representative spatio-temporal baselines. Compared with ST-GCN, the proposed framework achieves approximately 17–22% reductions in MAE and MSE, together with 8–12% improvements in PMR, while maintaining robust classification performance with precision, recall, and F1-score exceeding 0.82. These results indicate that the proposed approach can provide reliable support for agricultural information systems and diffusion-aware decision generation. Full article
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
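A minimal sketch of the graph construction implied above (assumed details: parcel centroids as node positions, a k-nearest-neighbor adjacency, and a toy per-parcel pest-severity series as node features; the paper's actual graph definition and environmental embeddings are not reproduced):

```python
import numpy as np

def knn_adjacency(centroids, k=3):
    """Symmetric k-nearest-neighbor adjacency matrix over farmland parcels."""
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-loops
    adj = np.zeros_like(d)
    for i in range(len(centroids)):
        adj[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(adj, adj.T)               # make the graph undirected

rng = np.random.default_rng(1)
centroids = rng.uniform(0, 1000, size=(6, 2))   # toy parcel centroids (metres)
severity = rng.uniform(0, 1, size=(6, 4))       # node features: severity over 4 acquisition dates
A = knn_adjacency(centroids)
print(A.shape, severity.shape)                  # (6, 6) (6, 4)
```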

26 pages, 5686 KB  
Article
MAFMamba: A Multi-Scale Adaptive Fusion Network for Semantic Segmentation of High-Resolution Remote Sensing Images
by Boxu Li, Xiaobing Yang and Yingjie Fan
Sensors 2026, 26(2), 531; https://doi.org/10.3390/s26020531 - 13 Jan 2026
Viewed by 81
Abstract
With rapid advancements in sub-meter satellite and aerial imaging technologies, high-resolution remote sensing imagery has become a pivotal source for geospatial information acquisition. However, current semantic segmentation models encounter two primary challenges: (1) the inherent trade-off between capturing long-range global context and preserving precise local structural details—where excessive reliance on downsampled deep semantics often results in blurred boundaries and the loss of small objects and (2) the difficulty in modeling complex scenes with extreme scale variations, where objects of the same category exhibit drastically different morphological features. To address these issues, this paper introduces MAFMamba, a multi-scale adaptive fusion visual Mamba network tailored for high-resolution remote sensing images. To mitigate scale variation, we design a lightweight hybrid encoder incorporating an Adaptive Multi-scale Mamba Block (AMMB) in each stage. Driven by a Multi-scale Adaptive Fusion (MSAF) mechanism, the AMMB dynamically generates pixel-level weights to recalibrate cross-level features, establishing a robust multi-scale representation. Simultaneously, to strictly balance local details and global semantics, we introduce a Global–Local Feature Enhancement Mamba (GLMamba) in the decoder. This module synergistically integrates local fine-grained features extracted by convolutions with global long-range dependencies modeled by the Visual State Space (VSS) layer. Furthermore, we propose a Multi-Scale Cross-Attention Fusion (MSCAF) module to bridge the semantic gap between the encoder’s shallow details and the decoder’s high-level semantics via an efficient cross-attention mechanism. Extensive experiments on the ISPRS Potsdam and Vaihingen datasets demonstrate that MAFMamba surpasses state-of-the-art Convolutional Neural Network (CNN), Transformer, and Mamba-based methods in terms of mIoU and mF1 scores. Notably, it achieves superior accuracy while maintaining linear computational complexity and low memory usage, underscoring its efficiency in complex remote sensing scenarios. Full article
(This article belongs to the Special Issue Intelligent Sensors and Artificial Intelligence in Building)

26 pages, 19685 KB  
Article
UAV NDVI-Based Vigor Zoning Predicts PR-Protein Accumulation and Protein Instability in Chardonnay and Sauvignon Blanc Wines
by Adrián Vera-Esmeraldas, Mauricio Galleguillos, Mariela Labbé, Alejandro Cáceres-Mella, Francisco Rojo and Fernando Salazar
Plants 2026, 15(2), 243; https://doi.org/10.3390/plants15020243 - 13 Jan 2026
Viewed by 152
Abstract
Protein instability in white wines is mainly caused by pathogenesis-related (PR) proteins that survive winemaking and can form haze in bottle. Because PR-protein synthesis is modulated by vine stress, this study evaluated whether unmanned aerial vehicle (UAV) multispectral imagery and NDVI-based vigor zoning can be used as early predictors of protein instability in commercial Chardonnay and Sauvignon Blanc wines. High-resolution multispectral images were acquired over two seasons (2023–2024) in two vineyards, and three vigor zones (high, medium, low) were delineated from the NDVI at the individual vine scale. A total of 180 georeferenced vines were sampled, and musts were analyzed for thaumatin-like proteins and chitinases via RP-HPLC. Separate microvinifications were carried out for each vigor zone and cultivar, and the resulting wines were evaluated for protein instability (heat test) and bentonite requirements. Low-vigor vines consistently produced musts with higher PR-protein concentrations, greater turbidity after heating, and higher bentonite demand than high-vigor vines, with stronger effects in Sauvignon Blanc. These vigor-dependent patterns were stable across vintages, despite contrasting seasonal conditions. Linear discriminant analysis using NDVI, PR-protein content, turbidity, and bentonite dosage correctly separated vigor classes. Overall, UAV NDVI–based vigor zoning provided a robust, non-destructive tool for identifying vineyard zones with increased risk of protein instability. This approach supports precision enology by enabling site-specific stabilization strategies that reduce overtreatment with bentonite and preserve white wine quality. Full article
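The NDVI computation and a simple three-zone split can be sketched as follows (a minimal illustration; the tercile rule and the toy reflectance values are assumptions, not the study's exact zoning procedure):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - Red) / (NIR + Red), computed per vine from mean reflectances."""
    return (nir - red) / (nir + red + eps)

def vigor_zones(ndvi_per_vine):
    """Assign each vine to low / medium / high vigor using NDVI terciles (assumed rule)."""
    q1, q2 = np.quantile(ndvi_per_vine, [1 / 3, 2 / 3])
    return np.digitize(ndvi_per_vine, [q1, q2])   # 0 = low, 1 = medium, 2 = high

rng = np.random.default_rng(2)
nir = rng.uniform(0.3, 0.8, size=50)    # toy mean NIR reflectance per vine
red = rng.uniform(0.05, 0.3, size=50)   # toy mean red reflectance per vine
zones = vigor_zones(ndvi(nir, red))
print(np.bincount(zones))               # number of vines per vigor zone
```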

25 pages, 2897 KB  
Review
Integrating UAVs and Deep Learning for Plant Disease Detection: A Review of Techniques, Datasets, and Field Challenges with Examples from Cassava
by Wasiu Akande Ahmed, Olayinka Ademola Abiola, Dongkai Yang, Seyi Festus Olatoyinbo and Guifei Jing
Horticulturae 2026, 12(1), 87; https://doi.org/10.3390/horticulturae12010087 - 12 Jan 2026
Viewed by 130
Abstract
Cassava remains a critical food-security crop across Africa and Southeast Asia but is highly vulnerable to diseases such as cassava mosaic disease (CMD) and cassava brown streak disease (CBSD). Traditional diagnostic approaches are slow, labor-intensive, and inconsistent under field conditions. This review synthesizes current advances in combining unmanned aerial vehicles (UAVs) with deep learning (DL) to enable scalable, data-driven cassava disease detection. It examines UAV platforms, sensor technologies, flight protocols, image preprocessing pipelines, DL architectures, and existing datasets, and it evaluates how these components interact within UAV–DL disease-monitoring frameworks. The review also compares model performance across convolutional neural network-based and Transformer-based architectures, highlighting metrics such as accuracy, recall, F1-score, inference speed, and deployment feasibility. Persistent challenges—such as limited UAV-acquired datasets, annotation inconsistencies, geographic model bias, and inadequate real-time deployment—are identified and discussed. Finally, the paper proposes a structured research agenda including lightweight edge-deployable models, UAV-ready benchmarking protocols, and multimodal data fusion. This review provides a consolidated reference for researchers and practitioners seeking to develop practical and scalable cassava-disease detection systems. Full article

55 pages, 1599 KB  
Review
The Survey of Evolutionary Deep Learning-Based UAV Intelligent Power Inspection
by Shanshan Fan and Bin Cao
Drones 2026, 10(1), 55; https://doi.org/10.3390/drones10010055 - 12 Jan 2026
Viewed by 251
Abstract
With the rapid development of the power Internet of Things (IoT), the traditional manual inspection mode can no longer meet the growing demand for power equipment inspection. Unmanned aerial vehicle (UAV) intelligent inspection technology, with its efficient and flexible features, has become the mainstream solution. The rapid development of computer vision and deep learning (DL) has significantly improved the accuracy and efficiency of UAV intelligent inspection systems for power equipment. However, mainstream deep learning models have complex structures, and manual design is time-consuming and labor-intensive. In addition, the images collected during the power inspection process by UAVs have problems such as complex backgrounds, uneven lighting, and significant differences in object sizes, which require expert DL domain knowledge and many trial-and-error experiments to design models suitable for application scenarios involving power inspection with UAVs. In response to these difficult problems, evolutionary computation (EC) technology has demonstrated unique advantages in simulating the natural evolutionary process. This technology can independently design lightweight and high-precision deep learning models by automatically optimizing the network structure and hyperparameters. Therefore, this review summarizes the development of evolutionary deep learning (EDL) technology and provides a reference for applying EDL in object detection models used in UAV intelligent power inspection systems. First, the application status of DL-based object detection models in power inspection is reviewed. Then, how EDL technology improves the performance of the models in challenging scenarios such as complex terrain and extreme weather is analyzed by optimizing the network architecture. Finally, the challenges and future research directions of EDL technology in the field of UAV power inspection are discussed, including key issues such as improving the environmental adaptability of the model and reducing computing energy consumption, providing theoretical references for promoting the development of UAV power inspection technology to a higher level. Full article

23 pages, 5292 KB  
Article
Research on Rapid 3D Model Reconstruction Based on 3D Gaussian Splatting for Power Scenarios
by Huanruo Qi, Yi Zhou, Chen Chen, Lu Zhang, Peipei He, Xiangyang Yan and Mengqi Zhai
Sustainability 2026, 18(2), 726; https://doi.org/10.3390/su18020726 - 10 Jan 2026
Viewed by 236
Abstract
As core infrastructure of power transmission networks, power towers require high-precision 3D models, which are critical for intelligent inspection and digital twin applications of power transmission lines. Traditional reconstruction methods, such as LiDAR scanning and oblique photogrammetry, suffer from issues including high operational risks, low modeling efficiency, and loss of fine details. To address these limitations, this paper proposes a 3D Gaussian Splatting (3DGS)-based method for power tower 3D reconstruction to enhance reconstruction efficiency and detail preservation capability. First, a multi-view data acquisition scheme combining “unmanned aerial vehicle + oblique photogrammetry” was designed to capture RGB images acquired by Unmanned Aerial Vehicle (UAV) platforms, which are used as the primary input for 3D reconstruction. Second, a sparse point cloud was generated via Structure from Motion. Finally, based on 3DGS, Gaussian model initialization, differentiable rendering, and adaptive density control were performed to produce high-precision 3D models of power towers. Taking two typical power tower types as experimental subjects, comparisons were made with the oblique photogrammetry + ContextCapture method. Experimental results demonstrate that 3DGS not only achieves high model completeness (with the reconstructed model nearly indistinguishable from the original images) but also excels in preserving fine details such as angle steels and cables. Additionally, the final modeling time is reduced by over 70% compared to traditional oblique photogrammetry. 3DGS enables efficient and high-precision reconstruction of power tower 3D models, providing a reliable technical foundation for digital twin applications in power transmission lines. By significantly improving reconstruction efficiency and reducing operational costs, the proposed method supports sustainable power infrastructure inspection, asset lifecycle management, and energy-efficient digital twin applications. Full article

28 pages, 2780 KB  
Article
UAV Flight Orientation and Height Influence on Tree Crown Segmentation in Agroforestry Systems
by Juan Rodrigo Baselly-Villanueva, Andrés Fernández-Sandoval, Sergio Fernando Pinedo Freyre, Evelin Judith Salazar-Hinostroza, Gloria Patricia Cárdenas-Rengifo, Ronald Puerta, José Ricardo Huanca Diaz, Gino Anthony Tuesta Cometivos, Geomar Vallejos-Torres, Gianmarco Goycochea Casas, Pedro Álvarez-Álvarez and Zool Hilmi Ismail
Forests 2026, 17(1), 87; https://doi.org/10.3390/f17010087 - 9 Jan 2026
Viewed by 237
Abstract
Precise crown segmentation is essential for assessing structure, competition, and productivity in agroforestry systems, but delineation is challenging due to canopy heterogeneity and variability in aerial imagery. This study analyzes how flight height and orientation affect segmentation accuracy in an agroforestry system of the Peruvian Amazon, using RGB images acquired with a DJI Mavic Mini 3 Pro UAV and the instance-segmentation models YOLOv8 and YOLOv11. Four flight heights (40, 50, 60, and 70 m) and two orientations (parallel and transversal) were analyzed in an agroforestry system composed of Cedrelinga cateniformis (Ducke) Ducke, Calycophyllum spruceanum (Benth.) Hook.f. ex K.Schum., and Virola pavonis (A.DC.) A.C. Sm. Results showed that a flight height of 60 m provided the highest delineation accuracy (F1 ≈ 0.88 for YOLOv8 and 0.84 for YOLOv11), indicating an optimal balance between resolution and canopy coverage. Although YOLOv8 achieved the highest precision under optimal conditions, it exhibited greater variability with changes in flight geometry. In contrast, YOLOv11 showed a more stable and robust performance, with generalization gaps below 0.02, reflecting stronger adaptability to different acquisition conditions. At the species level, vertical position and crown morphological differences (such as symmetry, branching angle, and bifurcation level) directly influenced detection accuracy. Cedrelinga cateniformis displayed dominant and asymmetric crowns; Calycophyllum spruceanum had narrow, co-dominant crowns; and Virola pavonis exhibited symmetrical and intermediate crowns. These traits were associated with the detection and confusion patterns observed across the models, highlighting the importance of crown architecture in automated segmentation and the potential of UAVs combined with YOLO algorithms for the efficient monitoring of tropical agroforestry systems. Full article
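The F1 scores reported above come from matching predicted crowns to reference crowns. A minimal sketch of such an evaluation is shown below, simplified to bounding boxes with hypothetical coordinates (the study evaluates instance-segmentation masks; the greedy matching and the 0.5 IoU threshold are assumptions):

```python
import numpy as np

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

def detection_f1(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predicted crowns to reference crowns."""
    matched, tp = set(), 0
    for p in preds:
        ious = [(box_iou(p, g), j) for j, g in enumerate(gts) if j not in matched]
        if ious:
            best_iou, best_j = max(ious)
            if best_iou >= iou_thr:
                matched.add(best_j)
                tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

preds = [(10, 10, 50, 50), (60, 60, 100, 100)]   # hypothetical predicted crown boxes
gts = [(12, 12, 48, 52), (200, 200, 240, 240)]   # hypothetical reference crowns
print(round(detection_f1(preds, gts), 3))        # one true positive out of two boxes each
```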

22 pages, 5754 KB  
Article
Low-Cost Deep Learning for Building Detection with Application to Informal Urban Planning
by Lucas González, Jamal Toutouh and Sergio Nesmachnow
ISPRS Int. J. Geo-Inf. 2026, 15(1), 36; https://doi.org/10.3390/ijgi15010036 - 9 Jan 2026
Viewed by 225
Abstract
This article studies the application of deep neural networks for automatic building detection in aerial RGB images. Special focus is put on accuracy robustness in both well-structured and poorly planned urban scenarios, which pose significant challenges due to occlusions, irregular building layouts, and limited contextual cues. The applied methodology considers several CNNs using only RGB images as input, and both validation and transfer capabilities are studied. U-Net-based models achieve the highest single-model accuracy, with an Intersection over Union (IoU) of 0.9101. A soft-voting ensemble of the best U-Net models further increases performance, reaching a best ensemble IoU of 0.9665, improving over state-of-the-art building detection methods on standard benchmarks. The approach demonstrates strong generalization using only RGB imagery, supporting scalable, low-cost applications in urban planning and geospatial analysis. Full article
(This article belongs to the Special Issue Testing the Quality of GeoAI-Generated Data for VGI Mapping)
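The soft-voting ensemble and the IoU metric mentioned above can be sketched with synthetic masks standing in for U-Net outputs (a minimal illustration; the mask size, the 0.5 threshold, and the noise model are assumptions):

```python
import numpy as np

def soft_vote(prob_maps):
    """Average per-pixel building probabilities from several models (soft voting)."""
    return np.mean(prob_maps, axis=0)

def binary_iou(pred_mask, gt_mask):
    """Intersection over Union for binary building masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(3)
gt = rng.random((64, 64)) > 0.7                    # toy ground-truth building mask
probs = np.stack([np.clip(gt + rng.normal(0, 0.3, gt.shape), 0, 1) for _ in range(3)])
pred = soft_vote(probs) > 0.5                      # ensemble prediction after thresholding
print(round(binary_iou(pred, gt), 3))
```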

18 pages, 4523 KB  
Article
Remote Sensing of Nematode Stress in Coffee: UAV-Based Multispectral and Thermal Imaging Approaches
by Daniele de Brum, Gabriel Araújo e Silva Ferraz, Luana Mendes dos Santos, Felipe Augusto Fernandes, Marco Antonio Zanella, Patrícia Ferreira Ponciano Ferraz, Willian César Terra, Vicente Paulo Campos, Thieres George Freire da Silva, Ênio Farias de França e Silva and Alexsandro Oliveira da Silva
AgriEngineering 2026, 8(1), 22; https://doi.org/10.3390/agriengineering8010022 - 8 Jan 2026
Viewed by 207
Abstract
Early and non-destructive detection of plant-parasitic nematodes is critical for implementing site-specific management in coffee production systems. This study evaluated the potential of unmanned aerial vehicle (UAV) multispectral and thermal imaging, combined with textural analysis, to detect Meloidogyne exigua infestation in Coffea arabica (Topázio variety). Field surveys were conducted in two contrasting seasons (dry and rainy), and nematode incidence was identified and quantified by counting root galls. Vegetation indices (NDVI, GNDVI, NGRDI, NDRE, OSAVI), individual spectral bands, canopy temperature, and Haralick texture features were extracted from UAV-derived imagery and correlated with gall counts. Under the conditions of this experiment, strong correlations were observed between gall number and the red spectral band in both seasons (R > 0.60), while GNDVI (dry season) and NGRDI (rainy season) showed strong negative correlations with gall density. Thermal imaging revealed moderate positive correlations with infestation levels during the dry season, indicating potential for early stress detection when foliar symptoms were absent. Texture metrics from the red and green bands further improved detection capacity, particularly with a 3 × 3 pixel window at 135°. These results demonstrate that UAV-based multispectral and thermal imaging, enhanced by texture analysis, can provide reliable early indicators of nematode infestation in coffee. Full article
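The index-versus-gall-count correlations described above can be reproduced in outline from per-plant band means (a minimal sketch using hypothetical reflectance and gall values; OSAVI and the texture features are omitted):

```python
import numpy as np
from scipy.stats import pearsonr

def vegetation_indices(nir, red, green, red_edge, eps=1e-6):
    """A few of the indices used in the study, from per-plant mean reflectances."""
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "NGRDI": (green - red) / (green + red + eps),
        "NDRE":  (nir - red_edge) / (nir + red_edge + eps),
    }

rng = np.random.default_rng(4)
n = 30                                             # hypothetical number of sampled plants
bands = {b: rng.uniform(0.05, 0.6, n) for b in ("nir", "red", "green", "red_edge")}
galls = rng.integers(0, 200, n)                    # hypothetical root-gall counts
for name, index in vegetation_indices(**bands).items():
    r, p = pearsonr(index, galls)
    print(f"{name}: r = {r:+.2f} (p = {p:.2f})")
```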
