Search Results (861)

Search Parameters:
Keywords = aerial image classification

18 pages, 10604 KiB  
Article
Fast Detection of Plants in Soybean Fields Using UAVs, YOLOv8x Framework, and Image Segmentation
by Ravil I. Mukhamediev, Valentin Smurygin, Adilkhan Symagulov, Yan Kuchin, Yelena Popova, Farida Abdoldina, Laila Tabynbayeva, Viktors Gopejenko and Alexey Oxenenko
Drones 2025, 9(8), 547; https://doi.org/10.3390/drones9080547 - 1 Aug 2025
Abstract
Accurate classification and localization of plants in images acquired from an unmanned aerial vehicle (UAV) is essential for precision farming: it enables effective variable rate application, which not only saves chemicals but also reduces the environmental load on cultivated fields. Machine learning algorithms are widely used for plant classification, and the YOLO family in particular supports simultaneous identification, localization, and classification of plants; however, model quality depends heavily on the training set. This study aims to detect not only a cultivated plant (soybean) but also the weeds growing in the field. The dataset developed in the course of the research addresses this by covering soybean together with seven weed species common in the fields of Kazakhstan. The article describes an approach to preparing a training set of images for soybean fields using preliminary thresholding and bounding box (Bbox) segmentation of annotated images, which improves the quality of plant classification and localization. Computational experiments determined that Bbox segmentation gives the best results: classification and localization quality increased substantially (F1 score from 0.64 to 0.959, mAP50 from 0.72 to 0.979), and for the cultivated plant (soybean) the best classification results reported to date on UAV imagery were achieved with YOLOv8x (F1 score = 0.984). At the same time, the plant detection rate increased 13-fold compared with a model proposed earlier in the literature. Full article
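The preprocessing the abstract describes can be pictured with a minimal sketch: threshold an excess-green image to isolate vegetation, then tighten each annotated box to the thresholded foreground. Everything here (the index, the threshold value, and the synthetic image) is an illustrative assumption, not the authors' pipeline.

```python
# Hypothetical sketch: excess-green thresholding plus bounding-box tightening.
import cv2
import numpy as np

def vegetation_mask(bgr: np.ndarray, thresh: float = 20.0) -> np.ndarray:
    """Binary mask from an excess-green (2G - R - B) threshold."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2 * g - r - b
    return (exg > thresh).astype(np.uint8)

def tighten_bbox(mask, x, y, w, h):
    """Shrink an annotated box (x, y, w, h) to the foreground it contains."""
    roi = mask[y:y + h, x:x + w]
    ys, xs = np.nonzero(roi)
    if len(xs) == 0:                      # no vegetation found: keep original box
        return x, y, w, h
    return (x + int(xs.min()), y + int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

img = np.zeros((256, 256, 3), np.uint8)
img[160:200, 120:160, 1] = 200            # synthetic green patch as a "plant"
mask = vegetation_mask(img)
print(tighten_bbox(mask, 100, 140, 80, 80))   # -> (120, 160, 40, 40)
```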

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as varied leaf morphology, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned aerial vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom. as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction of sparse desert vegetation cover by placing sample plots along three gradient positions: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) Multispectral vegetation indices outperform visible-band indices and are more suitable for FVC extraction in vegetation-sparse desert regions. (2) A comparison of five machine learning regression models showed that the XGBoost and KNN models exhibited relatively lower estimation performance in the study area, and the spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC of naturally sparse desert vegetation. This study contributes to the limited existing research on remote sensing-based estimation of FVC and the characterization of spatial heterogeneity in small-scale, sparsely vegetated desert ecosystems dominated by a single species. Full article
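As a rough, hypothetical illustration of the inversion step, a random forest regressor can be fit on per-plot vegetation-index features and scored with the same metrics the abstract reports (R2, RMSE, MAE). The data below is synthetic and the features are stand-ins, not the study's variables.

```python
# Illustrative sketch (not the authors' code): random forest regression of FVC
# from vegetation-index features, scored with R^2, RMSE, and MAE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))       # e.g., NDVI-like index values per plot
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.02, 200)  # synthetic FVC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}",
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}",
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```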

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The approach calculates three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from UAV-collected RGB images. An unsupervised k-means clustering algorithm then divides the field into five vegetation vigor classes, and within each class the 25% of pixels with the lowest average index values are preliminarily classified as damaged (see the sketch below). A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust the index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate high agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products include vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistics. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management. Full article
(This article belongs to the Section Remote Sensors)
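The three indices have standard published definitions (ExG = 2G - R - B; GLI = (2G - R - B)/(2G + R + B); MGRVI = (G^2 - R^2)/(G^2 + R^2)), so the index-and-cluster step can be sketched as below. The input array and k-means settings are illustrative assumptions, not the plugin's code.

```python
# Sketch of the index-and-cluster step: compute ExG, GLI, and MGRVI from an
# RGB orthomosaic and k-means the pixels into five vigor classes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, size=(64, 64, 3))   # synthetic stand-in orthomosaic
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

eps = 1e-9                                       # avoid division by zero
exg = 2 * g - r - b                              # Excess Green
gli = (2 * g - r - b) / (2 * g + r + b + eps)    # Green Leaf Index
mgrvi = (g**2 - r**2) / (g**2 + r**2 + eps)      # Modified Green-Red Vegetation Index

features = np.stack([exg, gli, mgrvi], axis=-1).reshape(-1, 3)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
vigor_map = labels.reshape(rgb.shape[:2])        # five vegetation vigor classes
print(np.bincount(labels))                       # pixels per vigor class
```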

28 pages, 4007 KiB  
Article
Voting-Based Classification Approach for Date Palm Health Detection Using UAV Camera Images: Vision and Learning
by Abdallah Guettaf Temam, Mohamed Nadour, Lakhmissi Cherroun, Ahmed Hafaifa, Giovanni Angiulli and Fabio La Foresta
Drones 2025, 9(8), 534; https://doi.org/10.3390/drones9080534 - 29 Jul 2025
Viewed by 194
Abstract
In this study, we introduce the application of deep learning (DL) models, specifically convolutional neural networks (CNNs), for detecting the health status of date palm leaves using images captured by an unmanned aerial vehicle (UAV). The UAV is modeled using the Newton–Euler method to ensure stable flight and accurate image acquisition. The deep learning models are organized into a voting-based classification (VBC) system that combines multiple CNN architectures, including MobileNet, a handcrafted CNN, VGG16, and VGG19, to enhance classification accuracy and robustness. The classifiers independently generate predictions, and a voting mechanism determines the final classification. This hybridization of image-based visual servoing (IBVS) and classifiers allows immediate adaptation to changing conditions, providing smooth flight as well as robust vision-based classification. The dataset used in this study was collected with a dual-camera UAV that captures high-resolution images for detecting pests on date palm leaves. With the proposed classification strategy, the voting method achieved an accuracy of 99.16% on the test set for detecting health conditions of date palm leaves, surpassing each individual classifier. The results are discussed and compared to show the effectiveness of this classification technique. Full article
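Majority voting itself is simple to sketch; the stacked predictions and class labels below are hypothetical stand-ins for the outputs of the trained CNNs.

```python
# Minimal sketch of hard (majority) voting across several trained classifiers.
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_samples) integer class labels."""
    n_classes = predictions.max() + 1
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)              # winning class per sample

# Three hypothetical models voting on four images (0 = healthy, 1 = infested)
preds = np.array([[0, 1, 1, 0],              # e.g., MobileNet
                  [0, 1, 0, 0],              # e.g., VGG16
                  [1, 1, 1, 0]])             # e.g., VGG19
print(majority_vote(preds))                  # -> [0 1 1 0]
```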

21 pages, 15647 KiB  
Article
Research on Oriented Object Detection in Aerial Images Based on Architecture Search with Decoupled Detection Heads
by Yuzhe Kang, Bohao Zheng and Wei Shen
Appl. Sci. 2025, 15(15), 8370; https://doi.org/10.3390/app15158370 - 28 Jul 2025
Viewed by 222
Abstract
Object detection in aerial images can provide great support in traffic planning, national defense reconnaissance, hydrographic surveys, infrastructure construction, and other fields. Objects in aerial images are characterized by small pixel-area ratios, dense arrangements, and arbitrary inclination angles. To address these characteristics, we improved the Inception-ResNet feature extraction network with a Fast Architecture Search (FAS) module and propose a one-stage, anchor-free oriented object detector. The detector has a simple structure consisting only of convolutional layers, which reduces the number of model parameters. The label sampling strategy used during training is also optimized to resolve the problem of insufficient sampling. Finally, a decoupled detection head separates the bounding box regression task from the object classification task. Experimental results show that the proposed method achieves mean average precision (mAP) of 82.6%, 79.5%, and 89.1% on the DOTA1.0, DOTA1.5, and HRSC2016 datasets, respectively, and the detection speed reaches 24.4 FPS, which meets the needs of real-time detection. Full article
(This article belongs to the Special Issue Innovative Applications of Artificial Intelligence in Engineering)
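A decoupled head of the kind the abstract names can be sketched as two small convolutional branches, one for class scores and one for rotated-box parameters. The channel counts and the five-parameter (cx, cy, w, h, angle) box encoding are assumptions, not the paper's exact design.

```python
# Illustrative decoupled detection head: separate classification and
# (rotated) box-regression branches over a shared feature map.
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    def __init__(self, in_ch: int = 256, num_classes: int = 15):
        super().__init__()
        def branch(out_ch):                  # small conv stack per task
            return nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, out_ch, 1))
        self.cls_branch = branch(num_classes)    # class scores per location
        self.reg_branch = branch(5)              # (cx, cy, w, h, angle)

    def forward(self, feat: torch.Tensor):
        return self.cls_branch(feat), self.reg_branch(feat)

cls_out, reg_out = DecoupledHead()(torch.randn(1, 256, 32, 32))
print(cls_out.shape, reg_out.shape)          # (1, 15, 32, 32) (1, 5, 32, 32)
```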

22 pages, 4664 KiB  
Article
Aerial Image-Based Crop Row Detection and Weed Pressure Mapping Method
by László Moldvai, Péter Ákos Mesterházi, Gergely Teschner and Anikó Nyéki
Agronomy 2025, 15(8), 1762; https://doi.org/10.3390/agronomy15081762 - 23 Jul 2025
Viewed by 251
Abstract
Accurate crop row detection is crucial for determining weed pressure (weeds per square meter). The task is complicated, however, by the similarity between crops and weeds, by missing plants within rows, and by the varying growth stages of both. Our hypothesis was that in drone imagery captured at altitudes of 20–30 m, where individual plant details are not discernible, weed presence among crops can be detected statistically, allowing a weed distribution map to be generated. This study proposes a computer vision method for images captured by unmanned aerial vehicles (UAVs), consisting of six main phases, and tests it on 208 images. The algorithm performs well under normal conditions; however, when weed density is too high, it fails to detect the row direction properly and begins processing misleading data. To investigate these cases, 120 artificial datasets were created with varying parameters and the scenarios were analyzed. It was found that a rate variable, the in-row concentration ratio (IRCR), can be used to determine whether a result is valid (usable) or invalid (to be discarded). The F1 score combines precision and recall via their harmonic mean; the "1" indicates that precision and recall are weighted equally, i.e., β = 1 in the general Fβ formula (given below). In a case of moderate weed infestation, with 678 crop plants and 600 weeds present, the algorithm achieved an F1 score of 86.32% in plant classification, even with a 4% row disturbance level. Furthermore, the IRCR also indicates the level of weed pressure in the area: the correlation between the ground-truth weed-to-crop ratio and the weed/crop classification rate produced by the algorithm is 98–99%. As a result, the algorithm can filter out heavily infested areas that require full weed control and generate weed density maps elsewhere to support precision weed management. Full article
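For reference, the general Fβ measure the abstract invokes is

```latex
F_\beta = (1+\beta^2)\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\cdot\mathrm{precision} + \mathrm{recall}},
\qquad
F_1 = \frac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}},
```

so F1 (β = 1) is the harmonic mean of precision and recall.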

21 pages, 3826 KiB  
Article
UAV-OVD: Open-Vocabulary Object Detection in UAV Imagery via Multi-Level Text-Guided Decoding
by Lijie Tao, Guoting Wei, Zhuo Wang, Zhaoshuai Qi, Ying Li and Haokui Zhang
Drones 2025, 9(7), 495; https://doi.org/10.3390/drones9070495 - 14 Jul 2025
Viewed by 458
Abstract
Object detection in drone-captured imagery has attracted significant attention due to its wide range of real-world applications, including surveillance, disaster response, and environmental monitoring. Most existing methods are developed under closed-set assumptions, and although some recent studies have begun to explore open-vocabulary or open-world detection, their application to UAV imagery remains limited and underexplored. In this paper, we address this limitation by exploiting the relationship between images and textual semantics to extend object detection in UAV imagery to an open-vocabulary setting. We propose a novel and efficient detector named Unmanned Aerial Vehicle Open-Vocabulary Detector (UAV-OVD), designed specifically for drone-captured scenes. To facilitate open-vocabulary object detection, we introduce improvements from three complementary perspectives. First, at the training level, we design a region–text contrastive loss to replace the conventional classification loss, allowing the model to align visual regions with textual descriptions beyond fixed category sets (a sketch follows below). Structurally, building on this, we introduce a multi-level text-guided fusion decoder that integrates visual features across multiple spatial scales under language guidance, improving overall detection performance and enhancing the representation and perception of small objects. Finally, from the data perspective, we enrich the original dataset with synonym-augmented category labels, enabling more flexible and semantically expressive supervision. Experiments on two widely used benchmark datasets demonstrate that our approach achieves significant improvements in both mAP and recall. For zero-shot detection on xView, UAV-OVD achieves 9.9 mAP and 67.3 recall, 1.1 and 25.6 points higher, respectively, than YOLO-World. In terms of speed, UAV-OVD reaches 53.8 FPS, nearly twice as fast as YOLO-World and five times faster than DetrReg, demonstrating its strong potential for real-time open-vocabulary detection in UAV imagery. Full article
(This article belongs to the Special Issue Applications of UVs in Digital Photogrammetry and Image Processing)
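A region–text contrastive loss of the general kind described above can be sketched InfoNCE-style: each region embedding is pulled toward its paired text embedding and pushed away from the others. This is a generic formulation under assumed embedding shapes, not the authors' exact loss.

```python
# Hedged sketch of a region-text contrastive loss (InfoNCE-style).
import torch
import torch.nn.functional as F

def region_text_contrastive(region_emb, text_emb, temperature=0.07):
    """region_emb, text_emb: (N, D); row i of each is a matched pair."""
    region_emb = F.normalize(region_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = region_emb @ text_emb.t() / temperature   # (N, N) similarities
    targets = torch.arange(region_emb.size(0))         # diagonal = positives
    # symmetric cross-entropy over region->text and text->region directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = region_text_contrastive(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```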

19 pages, 1906 KiB  
Article
LADOS: Aerial Imagery Dataset for Oil Spill Detection, Classification, and Localization Using Semantic Segmentation
by Konstantinos Gkountakos, Maria Melitou, Konstantinos Ioannidis, Konstantinos Demestichas, Stefanos Vrochidis and Ioannis Kompatsiaris
Data 2025, 10(7), 117; https://doi.org/10.3390/data10070117 - 14 Jul 2025
Viewed by 421
Abstract
Oil spills on the water surface pose a significant environmental hazard, underscoring the critical need for Artificial Intelligence (AI) detection methods. Unmanned Aerial Vehicles (UAVs) can significantly improve the efficiency of early-stage oil spill detection, reducing environmental damage; however, the domain lacks training datasets. In this paper, LADOS is introduced, an aeriaL imAgery Dataset for Oil Spill detection, classification, and localization that incorporates both liquid and solid classes in low-altitude images. LADOS comprises 3388 images annotated at the pixel level across six distinct classes, including the background. In addition to a general oil class describing various oil spill appearances, LADOS provides a more detailed categorization that includes emulsions and sheens. Both instance and semantic segmentation approaches are examined in detail to validate the dataset's performance and its significance to the domain. Results on the test set demonstrate an overall performance exceeding 66% mean Intersection over Union (mIoU), with specific classes such as oil and emulsion surpassing 74% IoU in parts of the experiments. Full article
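For reference, the reported metrics are computed per class and then averaged:

```latex
\mathrm{IoU}_c = \frac{|P_c \cap G_c|}{|P_c \cup G_c|}
             = \frac{TP_c}{TP_c + FP_c + FN_c},
\qquad
\mathrm{mIoU} = \frac{1}{C}\sum_{c=1}^{C} \mathrm{IoU}_c ,
```

where P_c and G_c are the predicted and ground-truth pixel sets for class c, and C is the number of classes.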

24 pages, 3294 KiB  
Review
Trends and Applications of Principal Component Analysis in Forestry Research: A Literature and Bibliometric Review
by Gabriel Murariu, Lucian Dinca and Dan Munteanu
Forests 2025, 16(7), 1155; https://doi.org/10.3390/f16071155 - 13 Jul 2025
Viewed by 422
Abstract
Principal component analysis (PCA) is a widely applied multivariate statistical technique across scientific disciplines, and forestry is one of its most dynamic areas of use. Its primary strength lies in reducing data dimensionality and classifying parameters within complex ecological datasets. This study provides the first comprehensive bibliometric and literature review focused exclusively on PCA applications in forestry. A total of 96 articles published between 1993 and 2024 were analyzed using the Web of Science database and visualized with VOSviewer software, version 1.6.20. The bibliometric analysis revealed that the most active scientific fields were environmental sciences, forestry, and engineering, and that the journals with the most publications were Forests and Sustainability. Contributions came from 198 authors across 44 countries, with China, Spain, and Brazil the leading contributors. PCA has been employed in a wide range of forestry applications, including species classification, biomass modeling, environmental impact assessment, and forest structure analysis, and it increasingly supports decision-making in forest management, biodiversity conservation, and habitat evaluation. In recent years, emerging research has demonstrated innovative integrations of PCA with advanced technologies such as hyperspectral imaging, LiDAR, unmanned aerial vehicles (UAVs), and remote sensing platforms, leading to substantial improvements in forest fire detection, disease monitoring, and species discrimination. Furthermore, PCA has been combined with other analytical methods and machine learning models, including Lasso regression, support vector machines, and deep learning algorithms, improving data classification, feature extraction, and ecological modeling accuracy. These hybrid approaches underscore PCA's adaptability and relevance in addressing contemporary challenges in forestry research. By systematically mapping the evolution, distribution, and methodological innovations associated with PCA, this study fills a critical gap in the literature and offers a foundational reference for researchers and practitioners, highlighting both current trends and future directions for leveraging PCA in forest science and environmental monitoring. Full article
(This article belongs to the Section Forest Ecology and Management)
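The PCA-plus-classifier pattern the review surveys can be illustrated in a few lines; the synthetic features below stand in for correlated ecological variables, and the classifier choice is arbitrary.

```python
# Minimal illustration (assumption, not from the review): reduce correlated
# features with PCA, then classify in the reduced space.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                           random_state=0)       # stand-in for plot variables
pipe = make_pipeline(PCA(n_components=6), SVC(kernel="rbf"))
print(cross_val_score(pipe, X, y, cv=5).mean())  # accuracy in PCA space
```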

23 pages, 10698 KiB  
Article
Unmanned Aerial Vehicle-Based RGB Imaging and Lightweight Deep Learning for Downy Mildew Detection in Kimchi Cabbage
by Yang Lyu, Xiongzhe Han, Pingan Wang, Jae-Yeong Shin and Min-Woong Ju
Remote Sens. 2025, 17(14), 2388; https://doi.org/10.3390/rs17142388 - 10 Jul 2025
Viewed by 366
Abstract
Downy mildew is a highly destructive fungal disease that significantly reduces both the yield and quality of kimchi cabbage. Conventional detection methods rely on manual scouting, which is labor-intensive and prone to subjectivity. This study proposes an automated detection approach using RGB imagery acquired by an unmanned aerial vehicle (UAV), integrated with lightweight deep learning models for leaf-level identification of downy mildew. To improve disease feature extraction, Simple Linear Iterative Clustering (SLIC) segmentation was applied to the images. Among the evaluated models, Vision Transformer (ViT)-based architectures outperformed Convolutional Neural Network (CNN)-based models in terms of classification accuracy and generalization capability. For late-stage disease detection, DeiT-Tiny recorded the highest test accuracy (0.948) and macro F1-score (0.913), while MobileViT-S achieved the highest diseased recall (0.931). In early-stage detection, TinyViT-5M achieved the highest test accuracy (0.970) and macro F1-score (0.918); however, all models demonstrated reduced diseased recall under early-stage conditions, with DeiT-Tiny achieving the highest recall at 0.774. These findings underscore the challenges of identifying early symptoms using RGB imagery. Based on the classification results, prescription maps were generated to facilitate variable-rate pesticide application. Overall, this study demonstrates the potential of UAV-based RGB imaging for precision agriculture, while highlighting the importance of integrating multispectral data and utilizing domain adaptation techniques to enhance early-stage disease detection. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Food Security)
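The SLIC superpixel step mentioned in the abstract is available in scikit-image; the image and parameters below are illustrative, since the paper's settings are not given in this listing.

```python
# Sketch of SLIC superpixel segmentation via scikit-image.
from skimage.data import astronaut   # bundled stand-in RGB image
from skimage.segmentation import slic

img = astronaut()                            # any UAV RGB tile would do
segments = slic(img, n_segments=250, compactness=10, start_label=1)
print(segments.shape, segments.max())        # label map of ~250 superpixels
```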

30 pages, 5474 KiB  
Article
WHU-RS19 ABZSL: An Attribute-Based Dataset for Remote Sensing Image Understanding
by Mattia Balestra, Marina Paolanti and Roberto Pierdicca
Remote Sens. 2025, 17(14), 2384; https://doi.org/10.3390/rs17142384 - 10 Jul 2025
Viewed by 301
Abstract
The advancement of artificial intelligence (AI) in remote sensing (RS) increasingly depends on datasets that offer rich, structured supervision beyond traditional scene-level labels. Although existing benchmarks for aerial scene classification have facilitated progress in this area, their reliance on single-class annotations restricts their use in more flexible, interpretable and generalisable learning frameworks. In this study, we introduce WHU-RS19 ABZSL: an attribute-based extension of the widely adopted WHU-RS19 dataset. The new version comprises 1005 high-resolution aerial images across 19 scene categories, each annotated with a vector of 38 attributes covering objects (e.g., roads and trees), geometric patterns (e.g., lines and curves) and dominant colours (e.g., green and blue), defined through expert-guided annotation protocols. To demonstrate the value of the dataset, we conduct baseline experiments using deep learning models adapted for multi-label classification (ResNet18, VGG16, InceptionV3, EfficientNet and ViT-B/16), designed to capture the semantic complexity characteristic of real-world aerial scenes. Macro F1-scores range from 0.7385 for ResNet18 to 0.7608 for EfficientNet-B0. EfficientNet-B0 and ViT-B/16 are the top performers in overall macro F1-score and in consistency across attributes, while all models show a consistent decline in performance on infrequent or visually ambiguous categories. This confirms that accurately predicting semantic attributes in complex scenes is feasible. By enriching a standard benchmark with detailed, image-level semantic supervision, WHU-RS19 ABZSL supports a variety of downstream applications, including multi-label classification, explainable AI, semantic retrieval, and attribute-based zero-shot learning (ZSL). It thus provides a reusable, compact resource for advancing the semantic understanding of remote sensing and multimodal AI. Full article
(This article belongs to the Special Issue Remote Sensing Datasets and 3D Visualization of Geospatial Big Data)
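Adapting a backbone for multi-label attribute prediction typically means a 38-way sigmoid head trained with binary cross-entropy. The sketch below assumes a ResNet18 backbone and a 0.5 decision threshold; both are illustrative choices, not the paper's configuration.

```python
# Hedged sketch of multi-label attribute prediction over 38 attributes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_ATTRS = 38
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_ATTRS)   # 38 attribute logits
criterion = nn.BCEWithLogitsLoss()                      # one sigmoid per attribute

images = torch.randn(4, 3, 224, 224)                    # dummy batch
targets = torch.randint(0, 2, (4, NUM_ATTRS)).float()   # multi-hot attributes
loss = criterion(model(images), targets)
loss.backward()                                         # one training step's gradient
preds = (torch.sigmoid(model(images)) > 0.5).int()      # thresholded attributes
print(loss.item(), preds.shape)
```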

30 pages, 3796 KiB  
Article
Applying Deep Learning Methods for a Large-Scale Riparian Vegetation Classification from High-Resolution Multimodal Aerial Remote Sensing Data
by Marcel Reinhardt, Edvinas Rommel, Maike Heuner and Björn Baschek
Remote Sens. 2025, 17(14), 2373; https://doi.org/10.3390/rs17142373 - 10 Jul 2025
Viewed by 283
Abstract
The unique vegetation in riparian zones is fundamental for various ecological and socio-economic functions in these transitional areas. Sustainable management requires detailed spatial information about the occurring flora. Here, we present a Deep Learning (DL)-based approach for processing multimodal high-resolution remote sensing data (aerial RGB and near-infrared (NIR) images and elevation maps) to generate a classification map of the tidal Elbe and a section of the Rhine River (Germany). The ground truth was based on existing mappings of vegetation and biotope types. The results showed that (I) despite a large class imbalance, for the tidal Elbe, a high mean Intersection over Union (IoU) of about 78% was reached. (II) At the Rhine River, a lower mean IoU was reached due to the limited amount of training data and labelling errors. Applying transfer learning methods and labelling error correction increased the mean IoU to about 60%. (III) Early fusion of the modalities was beneficial. (IV) The performance benefits from using elevation maps and the NIR channel in addition to RGB images. (V) Model uncertainty was successfully calibrated by using temperature scaling. The generalization ability of the trained model can be improved by adding more data from future aerial surveys. Full article
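Temperature scaling, the calibration method point (V) of the abstract refers to, fits a single scalar T on held-out logits by minimizing negative log-likelihood and then divides all logits by T. The sketch below uses random stand-in logits and class counts.

```python
# Sketch of temperature scaling: fit one temperature T on validation logits.
import torch
import torch.nn.functional as F

logits = torch.randn(500, 6) * 3            # overconfident validation logits
labels = torch.randint(0, 6, (500,))

log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

def closure():
    opt.zero_grad()
    loss = F.cross_entropy(logits / log_t.exp(), labels)  # NLL of scaled logits
    loss.backward()
    return loss

opt.step(closure)
T = log_t.exp().item()
print(f"fitted temperature T = {T:.2f}")    # calibrated probs: softmax(z / T)
```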

22 pages, 3827 KiB  
Article
Photothermal Integration of Multi-Spectral Imaging Data via UAS Improves Prediction of Target Traits in Oat Breeding Trials
by David Evershed, Jason Brook, Sandy Cowan, Irene Griffiths, Sara Tudor, Marc Loosley, John H. Doonan and Catherine J. Howarth
Agronomy 2025, 15(7), 1583; https://doi.org/10.3390/agronomy15071583 - 28 Jun 2025
Viewed by 276
Abstract
The modelling and prediction of important agronomic traits from remotely sensed data is an evolving science and an attractive prospect for plant breeders, as manual crop phenotyping is both expensive and time-consuming. Major factors limiting robust prediction models include the appropriate integration of data across different years and sites and the availability of sufficient genetic and phenotypic diversity; variable weather patterns, especially at higher latitudes, add to the complexity of this integration. This study introduces a novel approach: photothermal time units are used to align spectral data from unmanned aerial system (UAS) images of spring, winter, and facultative oat (Avena sativa) trials conducted in different years at a trial site at Aberystwyth, on the western Atlantic seaboard of the UK. The resulting regression and classification models for various agronomic traits are of significant interest to oat breeding programmes. Potential applications include optimising breeding strategies, improving crop yield predictions, and enhancing the efficiency of resource allocation in breeding programmes. Full article
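The abstract does not define its photothermal time units, but a common construction, offered here purely as an assumption, multiplies daily thermal time (growing degree days) by a photoperiod factor; accumulated units can then index spectral observations across years. The base temperature, critical photoperiods, and weather records below are all illustrative.

```python
# Hedged sketch: accumulating photothermal time as GDD x photoperiod factor.
def gdd(t_min: float, t_max: float, t_base: float = 0.0) -> float:
    """Daily growing degree days above a base temperature."""
    return max((t_min + t_max) / 2 - t_base, 0.0)

def photoperiod_factor(day_length_h: float, p_base: float = 8.0,
                       p_opt: float = 16.0) -> float:
    """Scales 0..1 between a base and an optimal day length."""
    return min(max((day_length_h - p_base) / (p_opt - p_base), 0.0), 1.0)

# accumulate photothermal units over a toy run of daily weather records
days = [(6.0, 14.0, 12.5), (7.0, 16.0, 13.0), (9.0, 18.0, 13.5)]  # (tmin, tmax, daylen)
ptu = sum(gdd(tmin, tmax) * photoperiod_factor(dl) for tmin, tmax, dl in days)
print(f"accumulated photothermal units: {ptu:.1f}")
```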

26 pages, 14660 KiB  
Article
Succulent-YOLO: Smart UAV-Assisted Succulent Farmland Monitoring with CLIP-Based YOLOv10 and Mamba Computer Vision
by Hui Li, Fan Zhao, Feng Xue, Jiaqi Wang, Yongying Liu, Yijia Chen, Qingyang Wu, Jianghan Tao, Guocheng Zhang, Dianhan Xi, Jundong Chen and Hill Hiroki Kobayashi
Remote Sens. 2025, 17(13), 2219; https://doi.org/10.3390/rs17132219 - 28 Jun 2025
Viewed by 527
Abstract
Recent advances in unmanned aerial vehicle (UAV) technology combined with deep learning have greatly improved agricultural monitoring. However, accurately processing low-resolution images remains challenging for precision cultivation of succulents. To address this issue, this study proposes a novel method that combines super-resolution reconstruction (SRR) with object detection and deploys the combined models in a unified drone framework for large-scale, reliable monitoring of succulent plants. Specifically, we introduce MambaIR, an SRR method leveraging selective state-space models, which significantly improves the quality of UAV-captured low-resolution imagery (achieving a PSNR of 23.83 dB and an SSIM of 79.60%) and surpasses current state-of-the-art approaches. Additionally, we develop Succulent-YOLO, a target detection model optimized for succulent image classification, achieving a mean average precision (mAP@50) of 87.8% on high-resolution images. Used together, MambaIR and Succulent-YOLO achieve an mAP@50 of 85.1% on super-resolved images, closely approaching the performance on the original high-resolution images. Extensive experiments supported by Grad-CAM visualization show that our method captures the critical features of succulents and identifies the best trade-off between resolution enhancement and computational demands. By overcoming the limitations of low-resolution UAV imagery in agricultural monitoring, this solution provides an effective, scalable approach for evaluating succulent plant growth, and by addressing image-quality issues it facilitates informed decision-making while reducing technical hurdles. Ultimately, this study provides a robust foundation for expanding the practical use of UAVs and artificial intelligence in precision agriculture, promoting sustainable farming practices through advanced remote sensing technologies. Full article
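For reference, PSNR is defined from the mean squared error against the reference image:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}},
\qquad
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \hat{x}_i\right)^2 ,
```

where MAX is the maximum pixel value (255 for 8-bit images), x the reference, and x-hat the reconstruction; SSIM complements it by comparing local patterns of luminance, contrast, and structure.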

16 pages, 2931 KiB  
Article
Advanced Solar Panel Fault Detection Using VGG19 and Jellyfish Optimization
by Salih Abraheem, Ziyodulla Yusupov, Javad Rahebi and Raheleh Ghadami
Processes 2025, 13(7), 2021; https://doi.org/10.3390/pr13072021 - 26 Jun 2025
Cited by 1 | Viewed by 425
Abstract
Solar energy has become a vital renewable energy source (RES), and photovoltaic (PV) systems play a key role in its utilization; however, the performance of these systems can be compromised by faulty panels. This paper proposes a framework that combines the deep neural network VGG19 with the Jellyfish Optimization Search Algorithm (JFOSA) for efficient fault detection in aerial images. VGG19 performs automatic feature extraction, while JFOSA optimizes feature selection and significantly improves classification performance. The framework achieves 98.34% accuracy, 98.71% sensitivity, 98.69% specificity, and an AUC of 94.03%, outperforming baseline models and various optimization techniques, including ant colony optimization (ACO), the genetic algorithm (GA), and particle swarm optimization (PSO). The system demonstrated superior performance in detecting solar panel defects such as cracks, hot spots, and shadow defects, providing a robust, scalable, and automated solution for PV monitoring and an efficient, reliable way to maintain energy efficiency and system reliability in solar energy applications. Full article
(This article belongs to the Section Energy Systems)
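As a generic illustration of metaheuristic wrapper feature selection of the kind JFOSA performs (this sketch uses a simple random-search stand-in, not the jellyfish algorithm itself), candidate binary masks over deep features are scored by a downstream classifier; the synthetic features stand in for VGG19 activations.

```python
# Generic wrapper feature-selection sketch (random-search stand-in, NOT JFOSA):
# binary masks over extracted features are scored by a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=64, n_informative=8,
                           random_state=0)      # stand-in for VGG19 features

def fitness(mask: np.ndarray) -> float:
    if mask.sum() == 0:
        return 0.0                              # empty subsets score zero
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

best_mask, best_fit = None, -1.0
for _ in range(30):                             # a metaheuristic searches smarter
    mask = rng.integers(0, 2, size=X.shape[1])
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f
print(f"best accuracy {best_fit:.3f} with {int(best_mask.sum())} features")
```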
