Search Results (3,189)

Search Parameters:
Keywords = multispectral image

22 pages, 5692 KiB  
Article
RiceStageSeg: A Multimodal Benchmark Dataset for Semantic Segmentation of Rice Growth Stages
by Jianping Zhang, Tailai Chen, Yizhe Li, Qi Meng, Yanying Chen, Jie Deng and Enhong Sun
Remote Sens. 2025, 17(16), 2858; https://doi.org/10.3390/rs17162858 - 16 Aug 2025
Abstract
The accurate identification of rice growth stages is critical for precision agriculture, crop management, and yield estimation. Remote sensing technologies, particularly multimodal approaches that integrate high spatial and hyperspectral resolution imagery, have demonstrated great potential in large-scale crop monitoring. Multimodal data fusion offers complementary and enriched spectral–spatial information, providing novel pathways for crop growth stage recognition in complex agricultural scenarios. However, the lack of publicly available multimodal datasets specifically designed for rice growth stage identification remains a significant bottleneck that limits the development and evaluation of relevant methods. To address this gap, we present RiceStageSeg, a multimodal benchmark dataset captured by unmanned aerial vehicles (UAVs), designed to support the development and assessment of segmentation models for rice growth monitoring. RiceStageSeg contains paired centimeter-level RGB and 10-band multispectral (MS) images acquired during several critical rice growth stages, including jointing and heading. Each image is accompanied by fine-grained, pixel-level annotations that distinguish between the different growth stages. We establish baseline experiments using several state-of-the-art semantic segmentation models under both unimodal (RGB-only, MS-only) and multimodal (RGB + MS fusion) settings. The experimental results demonstrate that multimodal feature-level fusion outperforms unimodal approaches in segmentation accuracy. RiceStageSeg offers a standardized benchmark to advance future research in multimodal semantic segmentation for agricultural remote sensing. The dataset will be made publicly available on GitHub (accessed on 1 August 2025).
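As a rough illustration of the feature-level RGB + MS fusion setting used in the baselines, here is a minimal sketch assuming co-registered, equally sized RGB and 10-band MS patches and a hypothetical four-stage label set; the paper's actual baseline architectures are not reproduced.

```python
import torch
import torch.nn as nn

class FusionSegNet(nn.Module):
    """Minimal two-stream encoder with feature-level fusion and a
    pixel-wise classifier head (one class per growth stage)."""
    def __init__(self, ms_bands=10, n_stages=4):  # n_stages is assumed
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
        self.rgb_enc = encoder(3)        # centimeter-level RGB stream
        self.ms_enc = encoder(ms_bands)  # 10-band multispectral stream
        # Fuse by channel concatenation, then classify per pixel.
        self.head = nn.Conv2d(128, n_stages, 1)

    def forward(self, rgb, ms):
        fused = torch.cat([self.rgb_enc(rgb), self.ms_enc(ms)], dim=1)
        return self.head(fused)

# Toy forward pass on a 64x64 patch pair.
model = FusionSegNet()
logits = model(torch.rand(1, 3, 64, 64), torch.rand(1, 10, 64, 64))
print(logits.shape)  # torch.Size([1, 4, 64, 64])
```

In practice each stream would be a full segmentation backbone; channel concatenation is simply the most basic feature-level fusion rule.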

31 pages, 8383 KiB  
Article
Quantifying Emissivity Uncertainty in Multi-Angle Long-Wave Infrared Hyperspectral Data
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2025, 17(16), 2823; https://doi.org/10.3390/rs17162823 - 14 Aug 2025
Abstract
This study quantifies emissivity uncertainty using a new, specifically collected multi-angle thermal hyperspectral dataset, Nittany Radiance. Unlike previous research that primarily relied on model-based simulations, multispectral satellite imagery, or laboratory measurements, we use airborne hyperspectral long-wave infrared (LWIR) data captured from multiple viewing angles. The data were collected using the Blue Heron LWIR hyperspectral imaging sensor, flown on a light aircraft in a circular orbit centered on the Penn State University campus. This sensor, with 256 spectral bands (7.56–13.52 μm), captures multiple overlapping images with varying ranges and angles. We analyzed nine different natural and man-made targets across varying viewing geometries. We present a multi-angle atmospheric correction method, similar to FLAASH-IR, modified for multi-angle scenarios. Our results show that emissivity remains relatively stable at viewing zenith angles between 40 and 50° but decreases as angles exceed 50°. We found that emissivity uncertainty varies across the spectral range, with the 10.14–11.05 μm region showing the greatest stability (standard deviations typically below 0.005), while uncertainty increases significantly in regions with strong atmospheric absorption features, particularly around 12.6 μm. These results demonstrate the reliability of multi-angle hyperspectral measurements and the importance of angle-specific atmospheric correction for non-nadir imaging applications.
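The uncertainty metric described above, per-band standard deviation of retrieved emissivity across viewing angles, can be sketched as follows. The angle sampling and emissivity values below are synthetic stand-ins, not Nittany Radiance data.

```python
import numpy as np

# Rows = viewing zenith angles, columns = spectral bands.
angles_deg = np.linspace(40, 70, 13)           # assumed angle sampling
wavelengths = np.linspace(7.56, 13.52, 256)    # Blue Heron band range
rng = np.random.default_rng(0)
# Synthetic retrievals with a mild angular droop plus noise.
emissivity = (0.96 - 0.0004 * (angles_deg[:, None] - 40)
              + 0.003 * rng.standard_normal((13, 256)))

# Per-band standard deviation across angles quantifies uncertainty.
band_std = emissivity.std(axis=0)

# Stable window reported in the paper: 10.14-11.05 um, std < 0.005.
window = (wavelengths >= 10.14) & (wavelengths <= 11.05)
print("mean std in 10.14-11.05 um window:", band_std[window].mean())
print("bands exceeding 0.005 overall:", int((band_std > 0.005).sum()))
```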

25 pages, 5194 KiB  
Article
A Graph-Based Superpixel Segmentation Approach Applied to Pansharpening
by Hind Hallabia
Sensors 2025, 25(16), 4992; https://doi.org/10.3390/s25164992 - 12 Aug 2025
Viewed by 219
Abstract
In this paper, an image-driven regional pansharpening technique based on simplex optimization analysis with a graph-based superpixel segmentation strategy is proposed. This fusion approach optimally combines spatial information derived from a high-resolution panchromatic (PAN) image and spectral information captured from a low-resolution multispectral (MS) image to generate a unique comprehensive high-resolution MS image. As the performance of such a fusion method relies on the choice of the fusion strategy, and in particular on how the algorithm estimates the gain coefficients, our proposal is dedicated to computing the injection gains over a graph-driven segmentation map. The graph-based segments are obtained by applying simple linear iterative clustering (SLIC) on the MS image, followed by a region adjacency graph (RAG) merging stage. This graphical representation of the segmentation map is used as guidance for the spatial information to be injected during fusion processing. The high-resolution MS image is achieved by inferring the details locally in accordance with the local simplex injection fusion rule. The quality improvements achievable by our proposal are evaluated and validated at reduced and full scales using two high-resolution datasets collected by the GeoEye-1 and WorldView-3 sensors.
(This article belongs to the Section Sensing and Imaging)
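The SLIC-plus-RAG segmentation stage maps naturally onto scikit-image; a minimal sketch follows, assuming a recent version where the graph module is top-level (skimage >= 0.20). The sample image, segment count, and merge threshold are illustrative, and the simplex injection-gain estimation itself is not shown.

```python
import numpy as np
from skimage import data, segmentation, graph

# Stand-in for the low-resolution MS image (any RGB image works here).
ms_image = data.astronaut()

# Step 1: SLIC superpixels on the MS image.
labels = segmentation.slic(ms_image, n_segments=400, compactness=10,
                           start_label=1)

# Step 2: build a region adjacency graph on mean color and merge
# similar neighboring superpixels, yielding the guidance map that
# would drive the local injection-gain estimation.
rag = graph.rag_mean_color(ms_image, labels)
merged = graph.cut_threshold(labels, rag, thresh=29)

print("superpixels:", labels.max(),
      "-> merged regions:", len(np.unique(merged)))
```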

17 pages, 6208 KiB  
Article
Sweet—An Open Source Modular Platform for Contactless Hand Vascular Biometric Experiments
by David Geissbühler, Sushil Bhattacharjee, Ketan Kotwal, Guillaume Clivaz and Sébastien Marcel
Sensors 2025, 25(16), 4990; https://doi.org/10.3390/s25164990 - 12 Aug 2025
Viewed by 232
Abstract
Current finger-vein or palm-vein recognition systems usually require direct contact of the subject with the apparatus. This can be problematic in environments where hygiene is of primary importance. In this work, we present a contactless vascular biometrics sensor platform named sweet, which can be used for hand vascular biometrics studies (wrist, palm, and finger-vein) and surface features such as palmprint. It supports several acquisition modalities, such as multi-spectral Near-Infrared (NIR), RGB-color, Stereo Vision (SV), and Photometric Stereo (PS). Using this platform, we collected a dataset consisting of finger, palm, and wrist vascular data from 120 subjects. We present biometric experimental results, focusing on Finger-Vein Recognition (FVR). Finally, we discuss the fusion of multiple modalities. The acquisition software, parts of the hardware design, the new FV dataset, and the source code for our experiments are publicly available for research purposes.
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)

9 pages, 1443 KiB  
Article
Imaging Through Scattering Tissue Based on NIR Multispectral Image Fusion Technique
by Nisan Atiya, Amir Shemer, Ariel Schwarz, Yevgeny Beiderman and Yossef Danan
Sensors 2025, 25(16), 4977; https://doi.org/10.3390/s25164977 - 12 Aug 2025
Viewed by 140
Abstract
Non-invasive diagnostics play a crucial role in medicine, ensuring both contamination safety and patient comfort. The proposed study integrates hyperspectral imaging with advanced image fusion, enabling non-invasive diagnostic procedures within tissue. It utilizes near-infrared (NIR) wavelengths suitable for capturing reflections from objects within a dispersive layer, enabling the reconstruction of images of internal tissue layers. It can detect objects, including cancerous tumors (presented as phantoms), inside human tissue. This involves processing data from multiple images taken in different NIR bands and merging them through image fusion techniques. Our research reveals clear information about objects within the diffusive media that is visible only in the reconstructed images. The experimental results demonstrate a significant correlation with the samples employed in the study's experimental design.
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
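A toy sketch of the band-fusion idea follows: several co-registered NIR band images are merged so that whichever band carries the most local detail dominates each pixel. The weighting rule here (Laplacian energy) is a generic choice for illustration, not necessarily the fusion technique used in the paper.

```python
import numpy as np
from scipy import ndimage

def fuse_nir_bands(bands):
    """Fuse co-registered NIR band images by weighting each pixel with
    its local detail energy (Laplacian magnitude), so bands carrying
    more structure at a pixel contribute more to the fused result."""
    stack = np.stack([b.astype(float) for b in bands])
    stack = (stack - stack.min(axis=(1, 2), keepdims=True)) / (
        np.ptp(stack, axis=(1, 2), keepdims=True) + 1e-9)
    energy = np.abs(np.stack([ndimage.laplace(b) for b in stack])) + 1e-6
    weights = energy / energy.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Synthetic example: a hidden disc under scattering noise in three bands.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:128, :128]
target = ((yy - 64) ** 2 + (xx - 64) ** 2 < 400).astype(float)
bands = [target * a + rng.normal(0, 0.3, target.shape)
         for a in (0.2, 0.5, 0.8)]
fused = fuse_nir_bands(bands)
print(fused.shape)  # (128, 128)
```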

28 pages, 24868 KiB  
Article
Deep Meta-Connectivity Representation for Optically-Active Water Quality Parameters Estimation Through Remote Sensing
by Fangling Pu, Ziang Luo, Yiming Yang, Hongjia Chen, Yue Dai and Xin Xu
Remote Sens. 2025, 17(16), 2782; https://doi.org/10.3390/rs17162782 - 11 Aug 2025
Viewed by 168
Abstract
Monitoring optically-active water quality (OAWQ) parameters faces key challenges, primarily due to limited in situ measurements and the restricted availability of high-resolution multispectral remote sensing imagery. While deep learning has shown promise for OAWQ estimation, existing approaches such as GeoTile2Vec, which relies on geographic proximity, and SimCLR, a domain-agnostic contrastive learning method, fail to capture land cover-driven water quality patterns, limiting their generalizability. To address this, we present deep meta-connectivity representation (DMCR), which integrates multispectral remote sensing imagery with limited in situ measurements to estimate OAWQ parameters. Our approach constructs meta-feature vectors from land cover images to represent the water quality characteristics of each multispectral remote sensing image tile. We introduce the meta-connectivity concept to quantify the OAWQ similarity between different tiles. Building on this concept, we design a contrastive self-supervised learning framework that uses sets of quadruple tiles extracted from Sentinel-2 imagery based on their meta-connectivity to learn DMCR vectors. After the core neural network is trained, we apply a random forest model to estimate parameters such as chlorophyll-a (Chl-a) and turbidity using matched in situ measurements and DMCR vectors across time and space. We evaluate DMCR on Lake Erie and Lake Ontario, generating a series of Chl-a and turbidity distribution maps. Performance is assessed using the R2 and RMSE metrics. Results show that meta-connectivity more effectively captures water quality similarities between tiles than widely utilized geographic proximity approaches such as those used in GeoTile2Vec. Furthermore, DMCR outperforms baseline models such as SimCLR with randomly cropped tiles. The resulting distribution maps align well with known factors influencing Chl-a and turbidity levels, confirming the method's reliability. Overall, DMCR demonstrates strong potential for large-scale OAWQ estimation and contributes to improved monitoring of inland water bodies with limited in situ measurements through meta-connectivity-informed deep learning. The resulting spatiotemporal water quality maps can support large-scale inland water monitoring and early warning of harmful algal blooms.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
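The final estimation stage, a random forest regressor mapping learned DMCR vectors to matched in situ measurements, might look as follows; the embedding dimension, sample counts, and data are hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Stand-ins: learned DMCR embeddings per tile and matched in situ Chl-a.
rng = np.random.default_rng(42)
dmcr_vectors = rng.normal(size=(300, 64))   # hypothetical 64-d embeddings
chl_a = 5 + dmcr_vectors[:, :4].sum(axis=1) + rng.normal(0, 0.5, 300)

X_tr, X_te, y_tr, y_te = train_test_split(dmcr_vectors, chl_a,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
# Report the same metrics used in the paper: R2 and RMSE.
print(f"R2={r2_score(y_te, pred):.3f}  "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}")
```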

26 pages, 3316 KiB  
Article
Land8Fire: A Complete Study on Wildfire Segmentation Through Comprehensive Review, Human-Annotated Multispectral Dataset, and Extensive Benchmarking
by Anh Tran, Minh Tran, Esteban Marti, Jackson Cothren, Chase Rainwater, Sandra Eksioglu and Ngan Le
Remote Sens. 2025, 17(16), 2776; https://doi.org/10.3390/rs17162776 - 11 Aug 2025
Viewed by 302
Abstract
Early and accurate wildfire detection is critical for minimizing environmental damage and ensuring a timely response. However, existing satellite-based wildfire datasets suffer from limitations such as coarse ground truth, poor spectral coverage, and class imbalance, which hinder progress in developing robust segmentation models. In this paper, we introduce Land8Fire, a new large-scale wildfire segmentation dataset composed of over 20,000 multispectral image patches derived from Landsat 8 and manually annotated for high-quality fire masks. Building on the ActiveFire dataset, Land8Fire improves ground truth reliability and offers predefined splits for consistent benchmarking. We evaluate a range of state-of-the-art convolutional and transformer-based models, including UNet, DeepLabV3+, SegFormer, and Mask2Former, and investigate the impact of different objective functions (cross-entropy and focal losses) and spectral band combinations (B1–B11). Our results reveal that focal loss, though effective for small object detection, underperforms in scenarios with clustered fires, leading to reduced recall. In contrast, spectral analysis highlights the critical role of the short-wave infrared 1 (SWIR1) and short-wave infrared 2 (SWIR2) bands, with further gains observed when including near-infrared (NIR) to penetrate smoke and cloud cover. Land8Fire sets a new benchmark for wildfire segmentation and provides valuable insights for advancing fire detection research in remote sensing.
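For reference, here is a common form of the binary focal loss compared against cross-entropy in the paper, using the standard gamma/alpha defaults; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Binary focal loss for fire/no-fire segmentation logits.
    Down-weights easy pixels via the (1 - p_t)^gamma factor."""
    bce = F.binary_cross_entropy_with_logits(logits, target,
                                             reduction="none")
    p_t = torch.exp(-bce)                     # prob. of the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Toy check: a confidently wrong pixel dominates an easy correct one.
logits = torch.tensor([[4.0, -4.0]])
target = torch.tensor([[1.0, 1.0]])
print(focal_loss(logits, target))
```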

21 pages, 9664 KiB  
Article
A Detection Approach for Wheat Spike Recognition and Counting Based on UAV Images and Improved Faster R-CNN
by Donglin Wang, Longfei Shi, Huiqing Yin, Yuhan Cheng, Shaobo Liu, Siyu Wu, Guangguang Yang, Qinge Dong, Jiankun Ge and Yanbin Li
Plants 2025, 14(16), 2475; https://doi.org/10.3390/plants14162475 - 9 Aug 2025
Viewed by 292
Abstract
This study presents an innovative unmanned aerial vehicle (UAV)-based intelligent detection method utilizing an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture to address the inefficiency and inaccuracy inherent in manual wheat spike counting. We systematically collected a high-resolution image dataset (2000 images, 4096 × 3072 pixels) covering key growth stages (heading, grain filling, and maturity) of winter wheat (Triticum aestivum L.) during 2022–2023 using a DJI M300 RTK equipped with multispectral sensors. The dataset encompasses diverse field scenarios under five fertilization treatments (organic-only, organic–inorganic 7:3 and 3:7 ratios, inorganic-only, and no fertilizer) and two irrigation regimes (full and deficit irrigation), ensuring representativeness and generalizability. For model development, we replaced the conventional VGG16 backbone with ResNet-50, incorporating residual connections and channel attention mechanisms to achieve 92.1% mean average precision (mAP) while reducing parameters from 135 M to 77 M (a 43% decrease). The GFLOPS of the improved model fell from 1.9 to 1.7, a decrease of 10.53%, improving computational efficiency. Performance tests demonstrated a 15% reduction in missed detection rate compared to YOLOv8 in dense canopies, with spike count regression analysis yielding R2 = 0.88 (p < 0.05) against manual measurements and yield prediction errors below 10% for optimal treatments. To validate robustness, we established a dedicated 500-image test set (25% of total data) spanning density gradients (30–80 spikes/m2) and varying illumination conditions, maintaining >85% accuracy even under cloudy weather. Furthermore, by integrating spike recognition with agronomic parameters (e.g., grain weight), we developed a comprehensive yield estimation model achieving 93.5% accuracy under optimal water–fertilizer management (70% ETc irrigation with a 3:7 organic–inorganic ratio). This work systematically addresses key technical challenges in automated spike detection through standardized data acquisition, lightweight model design, and field validation, offering significant practical value for smart agriculture development.
(This article belongs to the Special Issue Plant Phenotyping and Machine Learning)
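A starting point resembling the baseline, torchvision's Faster R-CNN with a ResNet-50 FPN backbone, is sketched below. The paper's channel-attention additions and training pipeline are not reproduced, and the snippet assumes torchvision >= 0.13 for the weights keywords.

```python
import torch
import torchvision

# Two classes: background + wheat spike. Weights are left uninitialized
# here to keep the example self-contained (no downloads).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
model.eval()

with torch.no_grad():
    # One RGB image tensor in [0, 1]; the API accepts a list of images.
    detections = model([torch.rand(3, 512, 512)])
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```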

18 pages, 7011 KiB  
Article
Monitoring Chrysanthemum Cultivation Areas Using Remote Sensing Technology
by Yin Ye, Meng-Ting Wu, Chun-Juan Pu, Jing-Mei Chen, Zhi-Xian Jing, Ting-Ting Shi, Xiao-Bo Zhang and Hui Yan
Horticulturae 2025, 11(8), 933; https://doi.org/10.3390/horticulturae11080933 - 7 Aug 2025
Viewed by 233
Abstract
Chrysanthemum has a long history of medicinal use, with rich germplasm resources and extensive cultivation. Traditional chrysanthemum cultivation involves complex patterns and long flowering periods, and the ongoing expansion of planting areas complicates statistical surveys. Currently, reliable, timely, and universally applicable standardized monitoring methods for chrysanthemum cultivation areas remain underdeveloped. This research employed 16 m resolution satellite imagery spanning 2021 to 2023, alongside 2 m resolution data acquired in 2022, to quantify chrysanthemum cultivation extent across Sheyang County, Jiangsu Province, China. After evaluating multiple classifiers, Maximum Likelihood Classification was selected as the optimal method. Subsequently, time-series-based post-classification processing was implemented: initial cultivation information extraction was performed through feature comparison, supervised classification, and temporal analysis. Accuracy validation via Overall Accuracy, the Kappa coefficient, Producer's Accuracy, and User's Accuracy identified critical issues, followed by targeted refinement of spectrally confused features to obtain precise area estimates. The chrysanthemum cultivation area in 2022 was quantified as 46,950,343 m2 at 2 m resolution and 46,332,538 m2 at 16 m resolution. Finally, the conversion ratio characteristics between resolutions were analyzed, yielding adjusted results of 38,466,192 m2 for 2021 and 47,546,718 m2 for 2023. These outcomes demonstrate strong alignment with local agricultural statistics, confirming the method's viability for chrysanthemum cultivation area computation.
(This article belongs to the Section Medicinals, Herbs, and Specialty Crops)
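The validation metrics named above can all be computed from a confusion matrix; a short sketch with a hypothetical two-class example follows.

```python
import numpy as np

def accuracy_report(cm):
    """Overall Accuracy, Kappa, and per-class Producer's and User's
    Accuracy from a confusion matrix (rows = reference, cols = map)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n
    # Chance agreement from the row/column marginals.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    producers = np.diag(cm) / cm.sum(axis=1)   # omission side
    users = np.diag(cm) / cm.sum(axis=0)       # commission side
    return oa, kappa, producers, users

# Hypothetical example: chrysanthemum vs. other land cover.
cm = [[820, 60],
      [45, 975]]
oa, kappa, pa, ua = accuracy_report(cm)
print(f"OA={oa:.3f} Kappa={kappa:.3f} PA={pa.round(3)} UA={ua.round(3)}")
```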

26 pages, 10480 KiB  
Article
Monitoring Chlorophyll Content of Brassica napus L. Based on UAV Multispectral and RGB Feature Fusion
by Yongqi Sun, Jiali Ma, Mengting Lyu, Jianxun Shen, Jianping Ying, Skhawat Ali, Basharat Ali, Wenqiang Lan, Yiwa Hu, Fei Liu, Weijun Zhou and Wenjian Song
Agronomy 2025, 15(8), 1900; https://doi.org/10.3390/agronomy15081900 - 7 Aug 2025
Viewed by 300
Abstract
Accurate prediction of chlorophyll content in Brassica napus L. (rapeseed) is essential for monitoring plant nutritional status and precision agricultural management. Existing studies tend to focus on a single cultivar, limiting general applicability. This study used unmanned aerial vehicle (UAV)-based RGB and multispectral imagery to evaluate the chlorophyll content of six rapeseed cultivars across mixed growth stages, including the seedling, bolting, and initial flowering stages. ExG-ExR threshold segmentation was applied to remove background interference. Subsequently, color and spectral indices were extracted from the segmented images and ranked according to their correlations with measured chlorophyll content. Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), and Support Vector Regression (SVR) models were independently established using subsets of the top-ranked features. Model performance was assessed by comparing prediction accuracy (R2 and RMSE). Results demonstrated significant accuracy improvements following background removal, especially for the SVR model. Compared to data without background removal, accuracy increased notably by 8.0% (R2p improved from 0.683 to 0.763) for color indices and 3.1% (R2p from 0.835 to 0.866) for spectral indices. Additionally, stepwise fusion of spectral and color indices further improved prediction accuracy. Optimal results were obtained by fusing the top seven color features ranked by correlation with chlorophyll content, achieving an R2p of 0.878 and an RMSE of 52.187 μg/g. These findings highlight the effectiveness of background removal and feature fusion in enhancing chlorophyll prediction accuracy.
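The ExG-ExR background-removal rule follows the standard Meyer and Neto formulation (vegetation where ExG - ExR > 0); a minimal sketch on a synthetic image is shown below. The zero threshold is the conventional choice and may differ from the paper's exact setting.

```python
import numpy as np

def exg_exr_mask(rgb):
    """Vegetation mask via the ExG - ExR > 0 rule.
    rgb: float array in [0, 1], shape (H, W, 3)."""
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromatic coords
    exg = 2 * g - r - b          # Excess Green
    exr = 1.4 * r - g            # Excess Red
    return (exg - exr) > 0

rng = np.random.default_rng(0)
img = rng.random((100, 100, 3))
img[25:75, 25:75, 1] += 0.8      # a "green" plot in the middle
mask = exg_exr_mask(np.clip(img, 0, 1))
print("vegetation fraction:", mask.mean())
```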

25 pages, 29559 KiB  
Article
CFRANet: Cross-Modal Frequency-Responsive Attention Network for Thermal Power Plant Detection in Multispectral High-Resolution Remote Sensing Images
by Qinxue He, Bo Cheng, Xiaoping Zhang and Yaocan Gan
Remote Sens. 2025, 17(15), 2706; https://doi.org/10.3390/rs17152706 - 5 Aug 2025
Viewed by 265
Abstract
Thermal Power Plants (TPPs) are widely used industrial facilities for electricity generation, and their detection is a key task in remote sensing image interpretation. However, detecting TPPs remains challenging due to their complex and irregular composition. Many traditional approaches focus on detecting compact, small-scale objects, while existing composite object detection methods are mostly part-based, limiting their ability to capture the structural and textural characteristics of composite targets like TPPs. Moreover, most of them rely on single-modality data, failing to fully exploit the rich information available in remote sensing imagery. To address these limitations, we propose a novel Cross-Modal Frequency-Responsive Attention Network (CFRANet). Specifically, the Modality-Aware Fusion Block (MAFB) facilitates the integration of multi-modal features, enhancing inter-modal interactions. Additionally, the Frequency-Responsive Attention (FRA) module leverages both spatial and localized dual-channel information and utilizes Fourier-based frequency decomposition to separately capture high- and low-frequency components, thereby improving the recognition of TPPs by learning both detailed textures and structural layouts. Experiments conducted on our newly proposed AIR-MTPP dataset demonstrate that CFRANet achieves state-of-the-art performance, with a mAP50 of 82.41%.
(This article belongs to the Section Remote Sensing Image Processing)
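The Fourier-based high/low frequency decomposition can be approximated with a radial mask in the 2-D FFT domain; the sketch below shows only the split, not the attention module built on top of it, and the mask radius is an arbitrary choice.

```python
import torch

def frequency_split(feat, radius=0.25):
    """Split a feature map into low- and high-frequency parts with a
    radial mask in the shifted 2-D Fourier domain."""
    fft = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    h, w = feat.shape[-2:]
    yy = torch.linspace(-0.5, 0.5, h).view(-1, 1).expand(h, w)
    xx = torch.linspace(-0.5, 0.5, w).view(1, -1).expand(h, w)
    low_mask = ((yy ** 2 + xx ** 2).sqrt() <= radius).to(feat.dtype)
    def back(f):
        return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real
    low = back(fft * low_mask)         # structural layout
    high = back(fft * (1 - low_mask))  # fine textures
    return low, high

feat = torch.rand(1, 8, 64, 64)
low, high = frequency_split(feat)
print(torch.allclose(low + high, feat, atol=1e-5))  # exact partition
```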

28 pages, 1806 KiB  
Systematic Review
Systemic Review and Meta-Analysis: The Application of AI-Powered Drone Technology with Computer Vision and Deep Learning Networks in Waste Management
by Tyrone Bright, Sarp Adali and Cristina Trois
Drones 2025, 9(8), 550; https://doi.org/10.3390/drones9080550 - 5 Aug 2025
Viewed by 399
Abstract
The exponential increase in Municipal Solid Waste (MSW) generation poses a challenge for waste managers, such as municipalities, seeking to effectively control waste streams. If waste streams are not managed correctly, they contribute negatively to climate change, marine plastic pollution, and human health. Waste streams therefore need to be identified, categorised and valorised to ensure that the most effective waste management strategy is employed. Research suggests that a more efficient process of identifying and categorising waste at the source can achieve this. The aim of this paper is thus to identify the state of research on AI-powered drones for identifying and categorising waste. The paper conducts a systematic review and meta-analysis of the application of drone technology integrated with image sensing technology and deep learning methods for waste management. Different systems are explored, and a quantitative meta-analysis of their performance metrics (such as the F1 score) is conducted to determine the best integration of technology. On this basis, the research proposes designing and developing a hybrid deep learning model with an integrated architecture (a YOLO-Transformer model) that can capture multispectral imagery data from drones for waste stream identification, categorisation, and potential valorisation by waste managers in small-scale environments.

22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Viewed by 301
Abstract
The unique characteristics of desert vegetation, such as diverse leaf morphologies, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned Aerial Vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction methods for sparse desert vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) Multispectral vegetation indices are superior to visible-band vegetation indices and more suitable for FVC extraction in sparsely vegetated desert regions. (2) Comparing five machine learning regression models, the XGBoost and KNN models exhibited relatively low estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.
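The model comparison described in (2) can be sketched by cross-validating several scikit-learn regressors on vegetation-index features; the data below are synthetic stand-ins, and XGBoost is omitted since it is a third-party package.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

# Stand-in predictors: a few multispectral VIs per plot; target: FVC.
rng = np.random.default_rng(7)
X = rng.random((120, 5))                 # e.g., NDVI, SAVI, and similar
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.02, 120)

models = {"RF": RandomForestRegressor(random_state=0),
          "LASSO": Lasso(alpha=0.001),
          "SVM": SVR(),
          "KNN": KNeighborsRegressor()}
for name, est in models.items():
    cv = cross_validate(est, X, y, cv=5,
                        scoring=("r2", "neg_root_mean_squared_error"))
    print(f"{name:5s} R2={cv['test_r2'].mean():.3f} "
          f"RMSE={-cv['test_neg_root_mean_squared_error'].mean():.3f}")
```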

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 - 31 Jul 2025
Viewed by 319
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from RGB images collected via UAV. Subsequently, an unsupervised k-means clustering algorithm was applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values were preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
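The core of the workflow, per-pixel RGB indices, k-means clustering into five vigor classes, and flagging the lowest-index quartile within each class, can be sketched as follows. The index formulas are the standard ExG/GLI/MGRVI definitions; the class count and 25% rule follow the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def damage_candidates(rgb, n_classes=5, quantile=0.25):
    """Flag candidate damage pixels: compute ExG/GLI/MGRVI, cluster
    into vigor classes, then mark the lowest-index quartile per class."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-9)
    mgrvi = (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + 1e-9)
    feats = np.stack([exg, gli, mgrvi], axis=-1).reshape(-1, 3)
    classes = KMeans(n_clusters=n_classes, n_init=10,
                     random_state=0).fit_predict(feats)
    mean_idx = feats.mean(axis=1)   # "average index value" per pixel
    damaged = np.zeros(len(feats), dtype=bool)
    for c in range(n_classes):
        members = classes == c
        cutoff = np.quantile(mean_idx[members], quantile)
        damaged[members] = mean_idx[members] <= cutoff
    return damaged.reshape(rgb.shape[:2])

mask = damage_candidates(np.random.default_rng(3).random((60, 60, 3)))
print("flagged fraction:", mask.mean())  # ~0.25 by construction
```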

29 pages, 5503 KiB  
Article
Feature Selection Framework for Improved UAV-Based Detection of Solenopsis invicta Mounds in Agricultural Landscapes
by Chun-Han Shih, Cheng-En Song, Su-Fen Wang and Chung-Chi Lin
Insects 2025, 16(8), 793; https://doi.org/10.3390/insects16080793 - 31 Jul 2025
Viewed by 367
Abstract
The red imported fire ant (RIFA; Solenopsis invicta) is an invasive species that severely threatens ecology, agriculture, and public health in Taiwan. In this study, the feasibility of applying multispectral imagery captured by unmanned aerial vehicles (UAVs) to detect red fire ant mounds was evaluated in Fenlin Township, Hualien, Taiwan. A DJI Phantom 4 multispectral drone collected reflectance in five bands (blue, green, red, red-edge, and near-infrared), derived indices (the normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), and photochemical pigment reflectance index (PPR)), and textural features. According to analysis of variance F-scores and random forest recursive feature elimination, vegetation indices and spectral features (e.g., NDVI, NIR, SAVI, and PPR) were the most significant predictors of ecological characteristics such as vegetation density and soil visibility. Texture features exhibited moderate importance and showed potential to capture intricate spatial patterns in nonlinear models. Despite limitations, including trade-offs related to flight height and environmental variability, the findings suggest that UAVs are an inexpensive, high-precision means of obtaining multispectral data for RIFA monitoring. These findings can be used to develop efficient mass-detection protocols for integrated pest control, with broader implications for invasive species monitoring.
(This article belongs to the Special Issue Surveillance and Management of Invasive Insects)
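The two feature-ranking steps named above, ANOVA F-scores and random-forest recursive feature elimination, map directly onto scikit-learn; the feature names and data below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.ensemble import RandomForestClassifier

# Stand-in features per image segment: 5 bands + 3 indices + textures.
names = ["blue", "green", "red", "rededge", "nir",
         "NDVI", "SAVI", "PPR", "tex1", "tex2"]
rng = np.random.default_rng(5)
X = rng.random((200, len(names)))
y = (X[:, 5] + 0.5 * X[:, 4] + rng.normal(0, 0.2, 200) > 1.0).astype(int)

# ANOVA F-scores rank each feature's mound/non-mound separability.
f_scores = SelectKBest(f_classif, k="all").fit(X, y).scores_

# Recursive feature elimination driven by random forest importances.
rfe = RFE(RandomForestClassifier(random_state=0),
          n_features_to_select=4).fit(X, y)

order = np.argsort(f_scores)[::-1]
print("top by F-score:", [names[i] for i in order[:4]])
print("kept by RF-RFE:", [n for n, k in zip(names, rfe.support_) if k])
```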
