Search Results (704)

Search Parameters:
Keywords = UAV multi-spectral images

26 pages, 15315 KB  
Article
Machine and Deep Learning Framework for Sargassum Detection and Fractional Cover Estimation Using Multi-Sensor Satellite Imagery
by José Manuel Echevarría-Rubio, Guillermo Martínez-Flores and Rubén Antelmo Morales-Pérez
Data 2025, 10(11), 177; https://doi.org/10.3390/data10110177 - 1 Nov 2025
Abstract
Over the past decade, recurring influxes of pelagic Sargassum have posed significant environmental and economic challenges in the Caribbean Sea. Effective monitoring is crucial for understanding bloom dynamics and mitigating their impacts. This study presents a comprehensive machine learning (ML) and deep learning (DL) framework for detecting Sargassum and estimating its fractional cover using imagery from key satellite sensors: the Operational Land Imager (OLI) on Landsat-8 and the Multispectral Instrument (MSI) on Sentinel-2. A spectral library was constructed from five core spectral bands (Blue, Green, Red, Near-Infrared, and Short-Wave Infrared). It was used to train an ensemble of five diverse classifiers: Random Forest (RF), K-Nearest Neighbors (KNN), XGBoost (XGB), a Multi-Layer Perceptron (MLP), and a 1D Convolutional Neural Network (1D-CNN). All models achieved high classification performance on a held-out test set, with weighted F1-scores exceeding 0.976. The probabilistic outputs from these classifiers were then leveraged as a direct proxy for the sub-pixel fractional cover of Sargassum. Critically, an inter-algorithm agreement analysis revealed that detections on real-world imagery are typically either of very high (unanimous) or very low (contentious) confidence, highlighting the diagnostic power of the ensemble approach. The resulting framework provides a robust and quantitative pathway for generating confidence-aware estimates of Sargassum distribution. This work supports efforts to manage these harmful algal blooms by providing vital information on detection certainty, while underscoring the critical need to empirically validate fractional cover proxies against in situ or UAV measurements. Full article
(This article belongs to the Section Spatial Data Science and Digital Earth)
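The ensemble's probabilistic outputs and inter-algorithm agreement described in the abstract above can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' pipeline; the function name, the 0.5 vote threshold, and the toy probability maps are all assumptions.

```python
import numpy as np

def ensemble_fraction_and_agreement(prob_maps, threshold=0.5):
    """Combine per-classifier Sargassum probability maps.

    prob_maps: array of shape (n_models, H, W) holding each model's
    predicted probability of the Sargassum class per pixel.
    Returns the mean probability (used here as a sub-pixel
    fractional-cover proxy) and the inter-model agreement, i.e. the
    fraction of models voting "Sargassum" at the given threshold.
    """
    prob_maps = np.asarray(prob_maps, dtype=float)
    fraction = prob_maps.mean(axis=0)           # fractional-cover proxy
    votes = (prob_maps >= threshold).sum(axis=0)
    agreement = votes / prob_maps.shape[0]      # 1.0 = unanimous detection
    return fraction, agreement

# Toy example: 5 models, 2x2 "image"
probs = np.array([
    [[0.90, 0.10], [0.60, 0.20]],
    [[0.80, 0.00], [0.40, 0.30]],
    [[0.95, 0.20], [0.70, 0.10]],
    [[0.85, 0.05], [0.30, 0.40]],
    [[0.90, 0.10], [0.55, 0.20]],
])
fraction, agreement = ensemble_fraction_and_agreement(probs)
```

Pixels where agreement is near 1.0 or 0.0 correspond to the "unanimous" detections noted in the abstract; intermediate values flag the contentious ones.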

25 pages, 2287 KB  
Article
Identification of Cotton Leaf Mite Damage Stages Using UAV Multispectral Images and a Stacked Ensemble Method
by Shifeng Fan, Qiang He, Yongqin Chen, Xin Xu, Wei Guo, Yanhui Lu, Jie Liu and Hongbo Qiao
Agriculture 2025, 15(21), 2277; https://doi.org/10.3390/agriculture15212277 - 31 Oct 2025
Abstract
Cotton leaf mites are pests that cause irreparable damage to cotton and pose a severe threat to cotton yield, and the application of unmanned aerial vehicles (UAVs) to monitor the incidence of cotton leaf mites across a vast region is important for cotton leaf mite prevention. In this work, 52 vegetation indices were calculated based on the original five bands of spliced UAV multispectral images, and six featured indices were screened using Shapley value theory. To classify and identify cotton leaf mite infestation classes, seven machine learning classification models were used: random forest (RF), support vector machine (SVM), extreme gradient boosting (XGB), light gradient boosting machine (LGBM), K-Nearest Neighbors (KNN), decision tree (DT), and gradient boosting decision tree (GBDT) models. The base models and metamodel used in the stacked models were drawn from a combination of four models, namely, the XGB, GBDT, KNN, and DT models, which were selected in accordance with the heterogeneity principle. The experimental results showed that the stacked model with XGB and KNN as base models and DT as the metamodel was the best performer, outperforming the other ensemble and single models, with an overall accuracy of 85.7% (precision: 93.3%, recall: 72.6%, and F1-score: 78.2% in the macro_avg case; precision: 88.6%, recall: 85.7%, and F1-score: 84.7% in the weighted_avg case). This approach provides support for using UAVs to monitor cotton leaf mite prevalence over vast regions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
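The stacked-ensemble idea summarized above — heterogeneous base learners whose out-of-fold predictions feed a metamodel — can be sketched with scikit-learn. This is not the authors' pipeline: GradientBoostingClassifier stands in for XGBoost, and the synthetic data, class count, and hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 6 features mimic 6 screened vegetation indices,
# 3 classes mimic infestation stages.
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[('gbdt', GradientBoostingClassifier(random_state=0)),
                ('knn', KNeighborsClassifier(n_neighbors=5))],
    final_estimator=DecisionTreeClassifier(max_depth=3, random_state=0),
    cv=5,   # out-of-fold base predictions feed the metamodel
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv=5` argument is what makes this a proper stack: the metamodel never sees base-learner predictions made on their own training folds.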
25 pages, 16046 KB  
Article
UAV-Based Multimodal Monitoring of Tea Anthracnose with Temporal Standardization
by Qimeng Yu, Jingcheng Zhang, Lin Yuan, Xin Li, Fanguo Zeng, Ke Xu, Wenjiang Huang and Zhongting Shen
Agriculture 2025, 15(21), 2270; https://doi.org/10.3390/agriculture15212270 - 31 Oct 2025
Abstract
Tea Anthracnose (TA), caused by fungi of the genus Colletotrichum, is one of the major threats to global tea production. UAV remote sensing has been explored for non-destructive and high-efficiency monitoring of diseases in tea plantations. However, variations in illumination, background, and meteorological factors undermine the stability of cross-temporal data. Data processing and modeling complexity further limits model generalizability and practical application. This study introduced a cross-temporal, generalizable disease monitoring approach based on UAV multimodal data coupled with relative-difference standardization. In an experimental tea garden, we collected multispectral, thermal infrared, and RGB images and extracted four classes of features: spectral (Sp), thermal (Th), texture (Te), and color (Co). The Normalized Difference Vegetation Index (NDVI) was used to identify reference areas and standardize features, which significantly reduced the relative differences in cross-temporal features. Additionally, we developed a vegetation–soil relative temperature (VSRT) index, which exhibits higher temporal-phase consistency than the conventional normalized relative canopy temperature (NRCT). A multimodal optimal feature set was constructed through sensitivity analysis based on the four feature categories. For different modality combinations (single and fused), three machine learning algorithms, K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP), were selected to evaluate disease classification performance due to their low computational burden and ease of deployment. Results indicate that the “Sp + Th” combination achieved the highest accuracy (95.51%), with KNN (95.51%) outperforming SVM (94.23%) and MLP (92.95%). Moreover, under the optimal feature combination and KNN algorithm, the model achieved high generalizability (86.41%) on independent temporal data. 
This study demonstrates that fusing spectral and thermal features with temporal standardization, combined with the simple and effective KNN algorithm, achieves accurate and robust tea anthracnose monitoring, providing a practical solution for efficient and generalizable disease management in tea plantations. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)
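The relative-difference standardization described above — re-expressing each feature relative to NDVI-selected reference areas so that data from different flight dates become comparable — can be sketched as follows. The exact formula and NDVI reference range are assumptions, not the paper's published definition; the same pattern applied with soil pixels as the reference gives a VSRT-style relative temperature.

```python
import numpy as np

def standardize_by_reference(feature, ndvi, ref_range=(0.2, 0.4)):
    """Relative-difference standardization sketch (assumed form):
    express each pixel's feature value relative to the mean over
    NDVI-selected reference pixels, removing date-specific offsets
    in illumination or temperature."""
    ref_mask = (ndvi >= ref_range[0]) & (ndvi <= ref_range[1])
    ref_mean = feature[ref_mask].mean()
    return (feature - ref_mean) / ref_mean

# Toy 2x2 scene: canopy temperature (deg C) and matching NDVI
ndvi = np.array([[0.25, 0.8], [0.3, 0.9]])
temp = np.array([[30.0, 24.0], [32.0, 23.0]])
rel = standardize_by_reference(temp, ndvi)
```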

19 pages, 577 KB  
Article
UAV Multispectral Imaging for Multi-Year Assessment of Crop Rotation Effects on Winter Rye
by Mindaugas Dorelis, Viktorija Vaštakaitė-Kairienė and Vaclovas Bogužas
Appl. Sci. 2025, 15(21), 11491; https://doi.org/10.3390/app152111491 - 28 Oct 2025
Abstract
Crop rotation is a cornerstone of sustainable agronomy, whereas continuous monoculture can degrade soil fertility and crop vigor. A three-year field experiment (2023–2025) in Lithuania compared winter rye grown in a long-term field experiment of continuous monoculture (with and without fertilizer/herbicide inputs) with five diversified rotation treatments that included manure, forage, or cover crop phases. Unmanned aerial vehicle (UAV) multispectral imaging was used to monitor crop health via the Normalized Difference Vegetation Index (NDVI, an indicator of plant vigor). NDVI measurements at three key developmental stages (flowering to ripening, BBCH 61–89) showed that diversified rotations consistently achieved higher NDVI than monoculture, indicating more robust crop growth. Notably, the most intensive and row-crop rotations had the highest canopy vigor, whereas continuous monocultures had the lowest. An anomalous weather year (2024) temporarily reduced NDVI differences, but rotation benefits re-emerged in 2025. Overall, UAV-based NDVI effectively captured rotation-induced differences in rye canopy vigor, highlighting the agronomic advantages of diversified cropping systems and the value of UAV remote sensing for crop monitoring. Full article
(This article belongs to the Special Issue Effects of the Soil Environment on Plant Growth)
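NDVI, the index used throughout the study above, is computed from the NIR and Red reflectance bands. The reflectance values below are hypothetical, chosen only to illustrate the rotation-vs-monoculture contrast reported in the abstract.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Standard NDVI: (NIR - Red) / (NIR + Red), values in [-1, 1].
    eps guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical plot-mean reflectances:
rotation_ndvi = ndvi(nir=0.45, red=0.05)     # vigorous canopy -> high NDVI
monoculture_ndvi = ndvi(nir=0.30, red=0.10)  # weaker canopy -> lower NDVI
```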

23 pages, 5266 KB  
Article
Satellite-Based Assessment of Intertidal Vegetation Dynamics in Continental Portugal with Sentinel-2 Data
by Ingrid Cardenas, Manuel Meyer, José Alberto Gonçalves, Isabel Iglesias and Ana Bio
Remote Sens. 2025, 17(21), 3540; https://doi.org/10.3390/rs17213540 - 26 Oct 2025
Abstract
Vegetated intertidal ecosystems, such as seagrass meadows, salt marshes, and macroalgal beds, are vital for biodiversity, coastal protection, and climate regulation; however, they remain highly vulnerable to anthropogenic and climate-induced stressors. This study aims to assess interannual changes in intertidal vegetation cover along the Portuguese mainland coast from 2015 to 2024 using Sentinel-2 satellite imagery calibrated with high-resolution multispectral unoccupied aerial vehicle (UAV) data, to determine the most accurate index for mapping intertidal vegetation. Among the 16 indices tested, the Atmospherically Resilient Vegetation Index (ARVI) showed the highest predictive performance. Based on a model relating intertidal vegetation cover to this index, an ARVI value greater than or equal to 0.214 was established to estimate the area covered with intertidal vegetation. Applying this threshold to time-series data revealed considerable spatial and temporal variability in vegetation cover, with estuarine systems such as the Ria de Aveiro and the Ria Formosa showing the greatest extents and marked fluctuations. At the national level, no consistent overall trend was identified for the study period. Despite limitations related to satellite image resolution and single-site validation, the results demonstrate the feasibility and utility of combining UAV data and satellite indices for long-term, large-scale monitoring of intertidal vegetation. Full article
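The ARVI index and the paper's 0.214 vegetation threshold can be sketched as follows. The gamma = 1 form of ARVI used here is the common default, and the toy pixel reflectances are assumptions.

```python
import numpy as np

def arvi(nir, red, blue, gamma=1.0, eps=1e-9):
    """Atmospherically Resilient Vegetation Index: like NDVI, but Red is
    replaced by Red - gamma*(Blue - Red) to reduce atmospheric effects."""
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb + eps)

def vegetation_mask(nir, red, blue, threshold=0.214):
    """Pixels at or above the paper's ARVI threshold are mapped as
    intertidal vegetation."""
    return arvi(nir, red, blue) >= threshold

# Two toy pixels: vegetated vs. bare sediment
nir = np.array([0.40, 0.15])
red = np.array([0.08, 0.10])
blue = np.array([0.06, 0.09])
mask = vegetation_mask(nir, red, blue)
```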

27 pages, 29561 KB  
Article
UAV Remote Sensing for Integrated Monitoring and Model Optimization of Citrus Leaf Water Content and Chlorophyll
by Weiqi Zhang, Shijiang Zhu, Yun Zhong, Hu Li, Aihua Sun, Yanqun Zhang and Jian Zeng
Agriculture 2025, 15(21), 2197; https://doi.org/10.3390/agriculture15212197 - 23 Oct 2025
Abstract
Leaf water content (LWC) and chlorophyll content (CHL) are pivotal physiological indicators for assessing citrus growth and stress responses. However, conventional measurement techniques—such as fresh-to-dry weight ratio and spectrophotometry—are destructive, time-consuming, and limited in spatial and temporal resolution, making them unsuitable for large-scale monitoring. To achieve efficient large-scale monitoring, this study proposes a synergistic inversion framework integrating UAV multispectral remote sensing with intelligent optimization algorithms. Field experiments during the 2024 growing season (April–October) in western Hubei collected 263 ground measurements paired with multispectral images. Sensitive spectral bands and vegetation indices for LWC and CHL were identified through Pearson correlation analysis. Five modeling approaches—Partial Least Squares Regression (PLS); Extreme Learning Machine (ELM); and ELM optimized by Particle Swarm Optimization (PSO-ELM), Artificial Hummingbird Algorithm (AHA-ELM), and Grey Wolf Optimizer (GWO-ELM)—were evaluated. Results demonstrated that (1) VI-based models outperformed raw spectral band models; (2) the PSO-ELM synergistic inversion model using sensitive VIs achieved optimal accuracy (validation R2: 0.790 for LWC, 0.672 for CHL), surpassing PLS by 15.16% (LWC) and 53.78% (CHL), and standard ELM by 20.80% (LWC) and 25.84% (CHL), respectively; and (3) AHA-ELM and GWO-ELM also showed significant enhancements. This research provides a robust technical foundation for precision management of citrus orchards in drought-prone regions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
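The Extreme Learning Machine (ELM) at the core of the models above admits a very compact implementation: random hidden-layer weights plus a closed-form least-squares solve for the output weights. This sketch omits the PSO/AHA/GWO optimization step the paper adds on top; the toy regression target and layer size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=40):
    """Minimal ELM: random input weights, tanh hidden layer, output
    weights solved in closed form by least squares. (PSO-ELM and the
    other variants tune these random weights instead of fixing them.)"""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = 2*x0 - x1 from noisy samples
X = rng.uniform(-1, 1, size=(200, 2))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.01, size=200)
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

Because training reduces to one linear solve, ELMs are cheap enough that metaheuristics like PSO can afford to retrain them many times while searching the weight space.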

29 pages, 10201 KB  
Article
Hybrid Methodological Evaluation Using UAV/Satellite Information for the Monitoring of Super-Intensive Olive Groves
by Esther Alfonso, Serafín López-Cuervo, Julián Aguirre, Enrique Pérez-Martín and Iñigo Molina
Appl. Sci. 2025, 15(20), 11171; https://doi.org/10.3390/app152011171 - 18 Oct 2025
Abstract
Advances in Earth observation technology using multispectral imagery from satellite Earth observation systems and sensors mounted on unmanned aerial vehicles (UAVs) are enabling more accurate crop monitoring. These images, once processed, facilitate the analysis of crop health by enabling the study of crop vigour, the calculation of biomass indices, and the continuous temporal monitoring using vegetation indices (VIs). These indicators allow for the identification of diseases, pests, or water stress, among others. This study compares images acquired with the Altum PT sensor (UAV) and Super Dove (satellite) to evaluate their ability to detect specific problems in super-intensive olive groves at two critical times: January, during pruning, and April, at the beginning of fruit development. Four different VIs were used, and multispectral maps were generated for each: the Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), the Normalized Difference Red Edge Index (NDRE) and the Leaf Chlorophyll Index (LCI). Data for each plant (n = 11,104) were obtained for analysis across all dates and sensors. A combined methodology (Spearman’s correlation coefficient, Student’s t-test and decision trees) was used to validate the behaviour of the variables and propose predictive models. The results showed significant differences between the sensors, with a common trend in spatial patterns and a correlation range between 0.45 and 0.68. Integrating both technologies enables multiscale assessment, optimizing agronomic management and supporting more sustainable precision agriculture. Full article
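Two of the indices named above (GNDVI and NDRE) and the Spearman correlation used to compare sensors can be sketched in NumPy. The per-tree values below are hypothetical; Spearman's rho is computed as the Pearson correlation of ranks, which is valid here because the toy data contain no ties.

```python
import numpy as np

def gndvi(nir, green):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green)

def ndre(nir, rededge):
    """Normalized Difference Red Edge: (NIR - RE) / (NIR + RE)."""
    return (nir - rededge) / (nir + rededge)

def spearman(a, b):
    """Spearman's rho as Pearson correlation of ranks (no tie handling)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical per-tree index values from the UAV and satellite sensors:
uav = np.array([0.31, 0.42, 0.28, 0.50, 0.37])
sat = np.array([0.29, 0.40, 0.30, 0.46, 0.36])
rho = spearman(uav, sat)
```

A rho in the study's reported 0.45–0.68 range would indicate the two sensors rank trees similarly despite their different spatial resolutions.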

22 pages, 2027 KB  
Article
Agri-DSSA: A Dual Self-Supervised Attention Framework for Multisource Crop Health Analysis Using Hyperspectral and Image-Based Benchmarks
by Fatema A. Albalooshi
AgriEngineering 2025, 7(10), 350; https://doi.org/10.3390/agriengineering7100350 - 17 Oct 2025
Abstract
Recent advances in hyperspectral imaging (HSI) and multimodal deep learning have opened new opportunities for crop health analysis; however, most existing models remain limited by dataset scope, lack of interpretability, and weak cross-domain generalization. To overcome these limitations, this study introduces Agri-DSSA, a novel Dual Self-Supervised Attention (DSSA) framework that simultaneously models spectral and spatial dependencies through two complementary self-attention branches. The proposed architecture enables robust and interpretable feature learning across heterogeneous data sources, facilitating the estimation of spectral proxies of chlorophyll content, plant vigor, and disease stress indicators rather than direct physiological measurements. Experiments were performed on seven publicly available benchmark datasets encompassing diverse spectral and visual domains: three hyperspectral datasets (Indian Pines with 16 classes and 10,366 labeled samples; Pavia University with 9 classes and 42,776 samples; and Kennedy Space Center with 13 classes and 5211 samples), two plant disease datasets (PlantVillage with 54,000 labeled leaf images covering 38 diseases across 14 crop species, and the New Plant Diseases dataset with over 30,000 field images captured under natural conditions), and two chlorophyll content datasets (the Global Leaf Chlorophyll Content Dataset (GLCC), derived from MERIS and OLCI satellite data between 2003–2020, and the Leaf Chlorophyll Content Dataset for Crops, which includes paired spectrophotometric and multispectral measurements collected from multiple crop species). To ensure statistical rigor and spatial independence, a block-based spatial cross-validation scheme was employed across five independent runs with fixed random seeds. Model performance was evaluated using R2, RMSE, F1-score, AUC-ROC, and AUC-PR, each reported as mean ± standard deviation with 95% confidence intervals. 
Results show that Agri-DSSA consistently outperforms baseline models (PLSR, RF, 3D-CNN, and HybridSN), achieving up to R2=0.86 for chlorophyll content estimation and F1-scores above 0.95 for plant disease detection. The attention distributions highlight physiologically meaningful spectral regions (550–710 nm) associated with chlorophyll absorption, confirming the interpretability of the model’s learned representations. This study serves as a methodological foundation for UAV-based and field-deployable crop monitoring systems. By unifying hyperspectral, chlorophyll, and visual disease datasets, Agri-DSSA provides an interpretable and generalizable framework for proxy-based vegetation stress estimation. Future work will extend the model to real UAV campaigns and in-field spectrophotometric validation to achieve full agronomic reliability. Full article
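The spectral self-attention branch described above is, at its core, scaled dot-product attention applied across band embeddings. The sketch below shows only that basic operation; Agri-DSSA's actual layers, dimensions, and training scheme are not reproduced here, and all shapes and weights are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the spectral
    dimension. X: (n_bands, d) band embeddings. Returns the attended
    features and the (n_bands, n_bands) attention matrix, whose rows
    show which bands the model attends to."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[1]))
    return A @ V, A

rng = np.random.default_rng(1)
n_bands, d = 8, 4
X = rng.normal(size=(n_bands, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, A = spectral_self_attention(X, Wq, Wk, Wv)
```

Inspecting the rows of `A` is the mechanism behind the interpretability claim above: high weights on chlorophyll-sensitive bands (550–710 nm) indicate physiologically meaningful attention.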

24 pages, 6738 KB  
Article
SVMobileNetV2: A Hybrid and Hierarchical CNN-SVM Network Architecture Utilising UAV-Based Multispectral Images and IoT Nodes for the Precise Classification of Crop Diseases
by Rafael Linero-Ramos, Carlos Parra-Rodríguez and Mario Gongora
AgriEngineering 2025, 7(10), 341; https://doi.org/10.3390/agriengineering7100341 - 10 Oct 2025
Abstract
This paper presents a novel hybrid and hierarchical architecture of a Convolutional Neural Network (CNN), based on MobileNetV2 and Support Vector Machines (SVM), for the classification of crop diseases (SVMobileNetV2). The system is fed by multispectral images captured by Unmanned Aerial Vehicles (UAVs) alongside data from IoT nodes. The primary objective is to improve classification performance in terms of both accuracy and precision. This is achieved by integrating contemporary Deep Learning techniques, specifically different CNN models, a prevalent type of artificial neural network composed of multiple interconnected layers, tailored for the analysis of agricultural imagery. The initial layers are responsible for identifying basic visual features such as edges and contours, while deeper layers progressively extract more abstract and complex patterns, enabling the recognition of intricate shapes. In this study, different datasets of tropical crop images, in this case banana crops, were constructed to evaluate the performance and accuracy of CNNs in detecting diseases in the crops, supported by transfer learning. For this, multispectral images are used to create false-color images that discriminate disease through spectral bands related to blue, green and red, in addition to red edge and near-infrared. Moreover, we used IoT nodes to include environmental data related to the temperature and humidity of the environment and the soil. Machine Learning models were evaluated and fine-tuned using standard evaluation metrics. For classification, we used fundamental metrics such as accuracy, precision, and the confusion matrix; in this study, an accuracy of up to 86.5% was obtained using current deep learning models and up to 98.5% using the proposed hybrid and hierarchical architecture (SVMobileNetV2). This represents a new paradigm to significantly improve classification using the proposed hybrid CNN-SVM architecture and UAV-based multispectral images.
Full article

18 pages, 7359 KB  
Article
Estimating Field-Scale Soil Organic Matter in Agricultural Soils Using UAV Hyperspectral Imagery
by Chenzhen Xia and Yue Zhang
AgriEngineering 2025, 7(10), 339; https://doi.org/10.3390/agriengineering7100339 - 10 Oct 2025
Abstract
Fast and precise monitoring of soil organic matter (SOM) during maize growth periods is crucial for real-time assessment of soil quality. However, the big challenge we usually face is that many agricultural soils are covered by crops or snow, and the bare soil period is short, which makes reliable SOM prediction complex and difficult. In this study, an unmanned aerial vehicle (UAV) was utilized to acquire multi-temporal hyperspectral images of maize across the key growth stages at the field scale. The auxiliary predictors, such as spectral indices (I), field management (F), plant characteristics (V), and soil properties (S), were also introduced. We used stepwise multiple linear regression, partial least squares regression (PLSR), random forest (RF) regression, and XGBoost regression models for SOM prediction, and the results show the following: (1) Multi-temporal remote sensing information combined with multi-source predictors and their combinations can accurately estimate SOM content across the key growth periods. The best-fitting model depended on the types of models and predictors selected. With the I + F + V + S predictor combination, the best SOM prediction was achieved by using the XGBoost model (R2 = 0.72, RMSE = 0.27%, nRMSE = 0.16%) in the R3 stage. (2) The relative importance of soil properties, spectral indices, plant characteristics, and field management was 55.36%, 26.09%, 9.69%, and 8.86%, respectively, for the multiple periods combination. Here, this approach can overcome the impact of the crop cover condition by using multi-temporal UAV hyperspectral images combined with valuable auxiliary variables. This study can also improve the field-scale farmland soil properties assessment and mapping accuracy, which will aid in soil carbon sequestration and soil management. Full article
(This article belongs to the Section Remote Sensing in Agriculture)
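The evaluation metrics reported above (R2, RMSE, nRMSE) can be computed as follows. Note that nRMSE definitions vary; dividing RMSE by the mean of the observations is one common convention and an assumption here, as are the toy SOM values.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (R2, RMSE, nRMSE) for a regression model. nRMSE is taken
    here as RMSE / mean(observed); some authors divide by the observed
    range instead."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - float(np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    nrmse = rmse / float(y_true.mean())
    return r2, rmse, nrmse

# Toy SOM content (%): observed vs. model-predicted
som_obs = [1.8, 2.1, 2.5, 3.0, 2.7]
som_pred = [1.9, 2.0, 2.6, 2.8, 2.7]
r2, rmse, nrmse = regression_metrics(som_obs, som_pred)
```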

22 pages, 8737 KB  
Article
UAV-Based Multispectral Imagery for Area-Wide Sustainable Tree Risk Management
by Kinga Mazurek, Łukasz Zając, Marzena Suchocka, Tomasz Jelonek, Adam Juźwiak and Marcin Kubus
Sustainability 2025, 17(19), 8908; https://doi.org/10.3390/su17198908 - 7 Oct 2025
Abstract
The responsibility for risk assessment and user safety in forested and recreational areas lies with the property owner. This study shows that unmanned aerial vehicles (UAVs), combined with remote sensing and GIS analysis, effectively support the identification of high-risk trees, particularly those with reduced structural stability. UAV-based surveys successfully detect 78% of dead or declining trees identified during ground inspections, while significantly reducing labor and enabling large-area assessments within a short timeframe. The study covered an area of 6.69 ha with 51 reference trees assessed on the ground. Although the multispectral camera also recorded the red-edge band, it was not included in the present analysis. Compared to traditional ground-based surveys, the UAV-based approach reduced fieldwork time by approx. 20–30% and labor costs by approx. 15–20%. Orthomosaics generated from images captured by commercial multispectral drones (e.g., DJI Mavic 3 Multispectral) provide essential information on tree condition, especially mortality indicators. UAV data collection is fast and relatively low-cost but requires equipment capable of capturing high-resolution imagery in specific spectral bands, particularly near-infrared (NIR). The findings suggest that UAV-based monitoring can enhance the efficiency of large-scale inspections. However, ground-based verification remains necessary in high-traffic areas where safety is critical. Integrating UAV technologies with GIS supports the development of risk management strategies aligned with the principles of precision forestry, enabling sustainable, more proactive and efficient monitoring of tree-related hazards. Full article
(This article belongs to the Section Sustainable Forestry)

28 pages, 5791 KB  
Article
Tree Health Assessment Using Mask R-CNN on UAV Multispectral Imagery over Apple Orchards
by Mohadeseh Kaviani, Brigitte Leblon, Thangarajah Akilan, Dzhamal Amishev, Armand LaRocque and Ata Haddadi
Remote Sens. 2025, 17(19), 3369; https://doi.org/10.3390/rs17193369 - 6 Oct 2025
Abstract
Accurate tree health monitoring in orchards is essential for optimal orchard production. This study investigates the efficacy of a deep learning-based object detection single-step method for detecting tree health on multispectral UAV imagery. A modified Mask R-CNN framework is employed with four different backbones—ResNet-50, ResNet-101, ResNeXt-101, and Swin Transformer—on three image combinations: (1) RGB images, (2) 5-band multispectral images comprising RGB, Red-Edge, and Near-Infrared (NIR) bands, and (3) three principal components (3PCs) computed from the reflectance of the five spectral bands and twelve associated vegetation index images. The Mask R-CNN, having a ResNeXt-101 backbone, and applied to the 5-band multispectral images, consistently outperforms other configurations, with an F1-score of 85.68% and a mean Intersection over Union (mIoU) of 92.85%. To address the class imbalance, class weighting and focal loss were integrated into the model, yielding improvements in the detection of the minority class, i.e., the unhealthy trees. The tested method has the advantage of allowing the detection of unhealthy trees over UAV images using a single-step approach. Full article
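The focal loss mentioned above for handling class imbalance down-weights well-classified examples so the minority class (unhealthy trees) contributes more to training. Below is the standard binary form of Lin et al.'s focal loss in NumPy; the alpha/gamma values and example probabilities are illustrative, not the paper's settings.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-9):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).
    p: predicted probability of the positive (unhealthy) class;
    y: 0/1 ground-truth labels. The (1 - p_t)**gamma factor shrinks
    the loss of confident, correct predictions."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # per-class weighting
    return -at * (1 - pt) ** gamma * np.log(pt)

# An easy, correct prediction is penalized far less than a hard one:
easy = focal_loss(0.95, 1)   # confident true positive
hard = focal_loss(0.30, 1)   # missed unhealthy tree
```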

28 pages, 1927 KB  
Systematic Review
Drone Imaging and Sensors for Situational Awareness in Hazardous Environments: A Systematic Review
by Siripan Rattanaamporn, Asanka Perera, Andy Nguyen, Thanh Binh Ngo and Javaan Chahl
J. Sens. Actuator Netw. 2025, 14(5), 98; https://doi.org/10.3390/jsan14050098 - 29 Sep 2025
Abstract
Situation awareness is essential for ensuring safety in hazardous environments, where timely and accurate information is critical for decision-making. Unmanned Aerial Vehicles (UAVs) have emerged as valuable tools in enhancing situation awareness by providing real-time data and monitoring capabilities in high-risk areas. This study explores the integration of advanced technologies, focusing on imaging and sensor technologies such as thermal, spectral, and multispectral cameras, deployed in critical zones. By merging these technologies into UAV platforms, responders gain access to essential real-time information while reducing human exposure to hazardous conditions. This study presents case studies and practical applications, highlighting the effectiveness of these technologies in a range of hazardous situations. Full article
(This article belongs to the Special Issue AI-Assisted Machine-Environment Interaction)

37 pages, 2297 KB  
Systematic Review
Search, Detect, Recover: A Systematic Review of UAV-Based Remote Sensing Approaches for the Location of Human Remains and Clandestine Graves
by Cherene de Bruyn, Komang Ralebitso-Senior, Kirstie Scott, Heather Panter and Frederic Bezombes
Drones 2025, 9(10), 674; https://doi.org/10.3390/drones9100674 - 26 Sep 2025
Abstract
Several approaches are currently being used by law enforcement to locate the remains of victims. Yet, traditional methods are invasive and time-consuming. Unmanned Aerial Vehicle (UAV)-based remote sensing has emerged as a potential tool to support the location of human remains and clandestine graves. While offering a non-invasive and low-cost alternative, UAV-based remote sensing needs to be tested and validated for forensic casework. To assess current knowledge, a systematic review of 19 peer-reviewed articles from four databases was conducted, focusing specifically on UAV-based remote sensing for human remains and clandestine grave location. The findings indicate that different sensors (colour, thermal, and multispectral cameras) were tested across a range of burial conditions and models (human and mammalian). While UAVs with imaging sensors can locate graves and decomposition-related anomalies, experimental designs from the reviewed studies lacked robustness in terms of replication and consistency across models. Trends also highlight the potential of automated detection of anomalies over manual inspection, potentially leading to improved predictive modelling. Overall, UAV-based remote sensing shows considerable promise for enhancing the efficiency of human remains and clandestine grave location, but methodological limitations must be addressed to ensure findings are relevant to real-world forensic cases. Full article

26 pages, 12387 KB  
Article
Mapping for Larimichthys crocea Aquaculture Information with Multi-Source Remote Sensing Data Based on Segment Anything Model
by Xirui Xu, Ke Nie, Sanling Yuan, Wei Fan, Yanan Lu and Fei Wang
Fishes 2025, 10(10), 477; https://doi.org/10.3390/fishes10100477 - 24 Sep 2025
Abstract
Monitoring Larimichthys crocea aquaculture in a low-cost, efficient, and flexible manner with remote sensing data is crucial for the optimal management and sustainable development of the aquaculture industry and intelligent fisheries. An innovative automated framework, based on the Segment Anything Model (SAM) and multi-source high-resolution remote sensing image data, is proposed for high-precision aquaculture facility extraction, overcoming the low efficiency and limited accuracy of traditional manual inspection methods. The research method includes systematic optimization of SAM segmentation parameters for different data sources and rigorous evaluation of model performance at multiple spatial resolutions. Additionally, the impact of different spectral band combinations on segmentation quality is systematically analyzed. Experimental results demonstrate a significant correlation between resolution and accuracy, with UAV-derived imagery achieving exceptional segmentation accuracy (97.71%), followed by Jilin-1 (91.64%) and Sentinel-2 (72.93%) data. Notably, the NIR-Blue-Red band combination exhibited superior performance in delineating aquaculture infrastructure, suggesting its optimal utility for such applications. A robust and scalable solution for automatically extracting aquaculture facilities is established, offering significant insights for extending SAM’s capabilities to broader remote sensing applications within marine resource assessment domains. Full article
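Reducing a multispectral scene to the NIR-Blue-Red triplet reported as best-performing can be sketched as below. SAM consumes 3-channel 8-bit images, so a band triplet must be selected and stretched; this is a hypothetical NumPy helper (the band names, per-band min-max stretch, and random placeholder data are illustrative assumptions, not the authors' preprocessing).

```python
import numpy as np

def band_composite(bands, order=("nir", "blue", "red")):
    """Stack selected spectral bands into a 3-channel uint8 composite.

    bands: dict mapping band name -> 2-D reflectance array
    order: band names to place in channels 0..2 (default NIR-Blue-Red)
    Each band is min-max stretched to the 0-255 range independently.
    """
    channels = []
    for name in order:
        b = bands[name].astype(np.float64)
        lo, hi = b.min(), b.max()
        scaled = (b - lo) / (hi - lo + 1e-12) * 255.0  # per-band stretch
        channels.append(scaled.astype(np.uint8))
    return np.stack(channels, axis=-1)                 # (H, W, 3) image

# Placeholder reflectance bands standing in for a real multispectral tile.
rng = np.random.default_rng(0)
bands = {name: rng.random((64, 64)) for name in ("blue", "green", "red", "nir")}
composite = band_composite(bands)  # ready to feed to a SAM mask generator
```

The resulting array has the shape and dtype SAM's automatic mask generation expects, so different band combinations can be compared simply by changing `order`.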
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
