Search Results (455)

Search Parameters:
Keywords = near infrared imagery

22 pages, 9985 KB  
Article
A Comparative Analysis of Multi-Spectral and RGB-Acquired UAV Data for Cropland Mapping in Smallholder Farms
by Evania Chetty, Maqsooda Mahomed and Shaeden Gokool
Drones 2026, 10(1), 72; https://doi.org/10.3390/drones10010072 - 21 Jan 2026
Viewed by 96
Abstract
Accurate cropland classification within smallholder farming systems is essential for effective land management, efficient resource allocation, and informed agricultural decision-making. This study evaluates cropland classification performance using Red, Green, Blue (RGB) and multi-spectral (blue, green, red, red-edge, near-infrared) unmanned aerial vehicle (UAV) imagery. Both datasets were derived from imagery acquired using a MicaSense Altum sensor mounted on a DJI Matrice 300 UAV. Cropland classification was performed using machine learning algorithms implemented within the Google Earth Engine (GEE) platform, applying both a non-binary classification of five land cover classes and a binary classification within a probabilistic framework to distinguish cropland from non-cropland areas. The results indicate that multi-spectral imagery achieved higher classification accuracy than RGB imagery for non-binary classification, with overall accuracies of 75% and 68%, respectively. For binary cropland classification, RGB imagery achieved an area under the receiver operating characteristic curve (AUC–ROC) of 0.75, compared to 0.77 for multi-spectral imagery. These findings suggest that, while multi-spectral data provides improved classification performance, RGB imagery can achieve comparable accuracy for fundamental cropland delineation. This study contributes baseline evidence on the relative performance of RGB and multi-spectral UAV imagery for cropland mapping in heterogeneous smallholder farming landscapes and supports further investigation of RGB-based approaches in resource-constrained agricultural contexts. Full article
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture—2nd Edition)
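The binary (cropland vs. non-cropland) evaluation metric reported above, AUC–ROC, can be computed with scikit-learn. The sketch below uses synthetic labels and scores, not the authors' data or pipeline, purely to show the metric's mechanics:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel classifier output: 1 = cropland, 0 = non-cropland
y_true = rng.integers(0, 2, size=1000)
# Scores correlate with the true label, mimicking an imperfect binary classifier
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=1000), 0.0, 1.0)

# AUC-ROC: probability that a random cropland pixel scores above a random
# non-cropland pixel; 0.5 = chance, 1.0 = perfect separation
auc = roc_auc_score(y_true, y_score)
```

An AUC near 0.75–0.77, as in the study, means the classifier ranks a randomly chosen cropland pixel above a non-cropland one about three times out of four.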

23 pages, 3010 KB  
Article
Monitoring Maize Phenology Using Multi-Source Data by Integrating Convolutional Neural Networks and Transformers
by Yugeng Guo, Wenzhi Zeng, Haoze Zhang, Jinhan Shao, Yi Liu and Chang Ao
Remote Sens. 2026, 18(2), 356; https://doi.org/10.3390/rs18020356 - 21 Jan 2026
Viewed by 97
Abstract
Effective monitoring of maize phenology under stress conditions is crucial for optimizing agricultural management and mitigating yield losses. Crop prediction models built on Convolutional Neural Networks (CNNs) have been widely applied. However, CNNs often struggle to capture long-range temporal dependencies in phenological data, which are crucial for modeling seasonal and cyclic patterns. The Transformer model complements this by leveraging self-attention mechanisms to effectively handle global contexts and extended sequences in phenology-related tasks; its multi-head attention provides a global understanding that CNNs lack. This study proposes a synergistic framework that combines CNN and Transformer models to realize global-local feature synergy, resulting in an innovative phenological monitoring model based on near-ground remote sensing technology. High-resolution imagery of maize fields was collected using unmanned aerial vehicles (UAVs) equipped with multispectral and thermal infrared cameras. By integrating these data with CNN and Transformer architectures, the proposed model enables accurate inversion and quantitative analysis of maize phenological traits. In the experiment, a network was constructed using multispectral and thermal infrared images from maize fields, and the model was validated against the collected experimental data. The results showed that the integration of multispectral imagery and accumulated temperature achieved an accuracy of 92.9%, while the inclusion of thermal infrared imagery further improved the accuracy to 97.5%. This study highlights the potential of UAV-based remote sensing, combined with CNN and Transformer architectures, as a transformative approach for precision agriculture. Full article

25 pages, 4670 KB  
Article
An Efficient Remote Sensing Index for Soybean Identification: Enhanced Chlorophyll Index (NRLI)
by Dongmei Lyu, Chenlan Lai, Bingxue Zhu, Zhijun Zhen and Kaishan Song
Remote Sens. 2026, 18(2), 278; https://doi.org/10.3390/rs18020278 - 14 Jan 2026
Viewed by 149
Abstract
Soybean is a key global crop for food and oil production, playing a vital role in ensuring food security and supplying plant-based proteins and oils. Accurate information on soybean distribution is essential for yield forecasting, agricultural management, and policymaking. In this study, we developed an Enhanced Chlorophyll Index (NRLI) to improve the separability between soybean and maize—two spectrally similar crops that often confound traditional vegetation indices. The proposed NRLI integrates red-edge, near-infrared, and green spectral information, effectively capturing variations in chlorophyll and canopy water content during key phenological stages, particularly from flowering to pod setting and maturity. Building upon this foundation, we further introduce a pixel-wise compositing strategy based on the peak phase of NRLI to enhance the temporal adaptability and spectral discriminability in crop classification. Unlike conventional approaches that rely on imagery from fixed dates, this strategy dynamically analyzes annual time-series data, enabling phenology-adaptive alignment at the pixel level. Comparative analysis reveals that NRLI consistently outperforms existing vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Greenness and Water Content Composite Index (GWCCI), across representative soybean-producing regions in multiple countries. It improves overall accuracy (OA) by approximately 10–20 percentage points, achieving accuracy rates exceeding 90% in large, contiguous cultivation areas. To further validate the robustness of the proposed index, benchmark comparisons were conducted against the Random Forest (RF) machine learning algorithm. The results demonstrated that the single-index NRLI approach achieved competitive performance, comparable to the multi-feature RF model, with accuracy differences generally within 1–2%. In some regions, NRLI even outperformed RF. 
This finding highlights NRLI as a computationally efficient alternative to complex machine learning models without compromising mapping precision. This study provides a robust, scalable, and transferable single-index approach for large-scale soybean mapping and monitoring using remote sensing. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Smart Agriculture and Digital Twins)
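The NRLI formula itself is not given in the abstract, but the baseline it is benchmarked against, NDVI, has a standard definition: (NIR − Red) / (NIR + Red). A minimal NumPy sketch on toy reflectance values (illustrative numbers, not the study's data):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero over dark pixels (water, shadow)
    return (nir - red) / (nir + red + eps)

dense_canopy = ndvi(0.45, 0.05)   # high NIR, low red absorption -> ~0.8
bare_soil = ndvi(0.25, 0.20)      # similar NIR and red -> near 0.1
```

Because maize and soybean both produce high NDVI at peak greenness, such a two-band index saturates; this is exactly the separability gap the abstract says NRLI addresses by adding red-edge and green information.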

23 pages, 4735 KB  
Article
Rice Yield Prediction Model at Pixel Level Using Machine Learning and Multi-Temporal Sentinel-2 Data in Valencia, Spain
by Rubén Simeón, Alba Agenjos-Moreno, Constanza Rubio, Antonio Uris and Alberto San Bautista
Agriculture 2026, 16(2), 201; https://doi.org/10.3390/agriculture16020201 - 13 Jan 2026
Viewed by 201
Abstract
Rice yield prediction at high spatial resolution is essential to support precision management and sustainable intensification in irrigated systems. While many remote sensing studies provide yield estimates at the field scale, pixel-level predictions are required to characterize within-field variability. This study assesses the potential of multitemporal Sentinel-2 imagery and machine learning to estimate rice yield at pixel level in the Albufera rice area (Valencia, Spain). Yield data from combine harvester maps were collected for ‘JSendra’ and ‘Bomba’ Japonica varieties over five growing seasons (2020–2024) and linked to 10 m Sentinel-2 bands in the visible, near-infrared (NIR) and short-wave infrared (SWIR) regions. Random Forest (RF) and XGBoost (XGB) models were trained with 2020–2023 data and independently validated in 2024. XGB systematically outperformed RF, achieving, at 110 and 130 DAS (days after sowing), R2 values of 0.74 and 0.85 and RMSE values of 0.63 and 0.28 t·ha−1 for ‘JSendra’ and ‘Bomba’, respectively. Prediction accuracy increased as the season progressed, and models using all spectral bands clearly outperformed configurations based only on spectral indices, confirming the dominant contribution of NIR reflectance. Spatial error analysis revealed errors at field edges and headlands, while central pixels were more accurately predicted. Overall, the proposed approach provides accurate, spatially explicit rice yield maps that capture within-field variability and support both end-of-season yield estimation and early season forecasting, enabling the identification of potentially low-yield zones to support targeted management decisions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
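The pixel-level regression setup described above can be sketched with scikit-learn. Everything below is synthetic: random "band reflectances" in which one NIR-like column dominates the yield signal, standing in for the authors' Sentinel-2 features and harvester yield maps:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
# 500 hypothetical pixels x 10 bands; column 7 plays the role of NIR
X = rng.uniform(0.0, 0.6, size=(500, 10))
# Yield (t/ha) driven mainly by the NIR-like band, plus noise
y = 4.0 + 8.0 * X[:, 7] - 3.0 * X[:, 3] + rng.normal(0.0, 0.3, size=500)

# Train on the first 400 "pixels", hold out the last 100 for validation,
# mirroring the train/validate split across seasons in the study
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])
r2 = r2_score(y[400:], model.predict(X[400:]))
```

With a strong NIR-driven signal the held-out R2 lands well above 0.5, loosely echoing the abstract's finding that full-band models dominated by NIR reflectance outperform index-only configurations.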

23 pages, 8140 KB  
Article
Comparative Assessment of Hyperspectral and Multispectral Vegetation Indices for Estimating Fire Severity in Mediterranean Ecosystems
by José Alberto Cipra-Rodriguez, José Manuel Fernández-Guisuraga and Carmen Quintano
Remote Sens. 2026, 18(2), 244; https://doi.org/10.3390/rs18020244 - 12 Jan 2026
Viewed by 198
Abstract
Assessing post-fire disturbance in Mediterranean ecosystems is essential for quantifying ecological impacts and guiding restoration strategies. This study evaluates fire severity following an extreme wildfire event (~28,000 ha) in northwestern Spain using vegetation indices (VIs) derived from PRISMA hyperspectral imagery, validated against field-based Composite Burn Index (CBI) measurements at the vegetation, soil, and site levels across three vegetation formations (coniferous forests, broadleaf forests, and shrublands). Hyperspectral VIs were benchmarked against multispectral VIs derived from Sentinel-2. Hyperspectral VIs yielded stronger correlations with CBI values than multispectral VIs. Vegetation-level CBI showed the highest correlations, reflecting the sensitivity of most VIs to canopy structural and compositional changes. Indices incorporating red-edge, near-infrared (NIR), and shortwave infrared (SWIR) bands demonstrated the greatest explanatory power. Among hyperspectral indices, DVIRED, EVI, and especially CAI performed best. For multispectral data, NDRE, CIREDGE, ENDVI, and GNDVI were the most effective. These findings highlight the strong potential of hyperspectral remote sensing for accurate, scalable post-fire severity assessment in heterogeneous Mediterranean ecosystems. Full article
(This article belongs to the Section Forest Remote Sensing)

24 pages, 10860 KB  
Article
Performance Evaluation of Deep Learning Models for Forest Extraction in Xinjiang Using Different Band Combinations of Sentinel-2 Imagery
by Hang Zhou, Kaiyue Luo, Lingzhi Dang, Fei Zhang and Xu Ma
Forests 2026, 17(1), 88; https://doi.org/10.3390/f17010088 - 9 Jan 2026
Viewed by 165
Abstract
Remote sensing provides an efficient approach for monitoring ecosystem dynamics in the arid and semi-arid regions of Xinjiang, yet traditional forest-land extraction methods (e.g., spectral indices, threshold segmentation) show limited adaptability in complex environments affected by terrain shadows, cloud contamination, and spectral confusion with grassland or cropland. To overcome these limitations, this study used three convolutional neural network-based models (FCN, DeepLabV3+, and PSPNet) for accurate forest-land extraction. Four tri-band training datasets were constructed from Sentinel-2 imagery using combinations of visible, red-edge, near-infrared, and shortwave infrared bands. Results show that the FCN model trained with B4–B8–B12 achieves the best performance, with an mIoU of 89.45% and an mFscore of 94.23%. To further assess generalisation in arid landscapes, ESA WorldCover and Dynamic World products were introduced as benchmarks. Comparative analyses of spatial patterns and quantitative metrics demonstrate that the FCN model exhibits robustness and scalability across large areas, confirming its effectiveness for forest-land extraction in arid regions. This study innovatively combines band combination optimization strategies with multiple deep learning models, offering a novel approach to resolving spectral confusion between forest areas and similar vegetation types in heterogeneous arid ecosystems. Its practical significance lies in providing a robust data foundation and methodological support for forest monitoring, ecological restoration, and sustainable land management in Xinjiang and similar regions. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

24 pages, 5947 KB  
Article
Integration of UAV Multispectral and Meteorological Data to Improve Maize Yield Prediction Accuracy
by Yuqiao Yan, Yaoyu Li, Shujie Jia, Yangfan Bai, Boxin Cao, Abdul Sattar Mashori, Fuzhong Li and Wuping Zhang
Agronomy 2026, 16(2), 163; https://doi.org/10.3390/agronomy16020163 - 8 Jan 2026
Viewed by 338
Abstract
This study, conducted in the Lifang Dryland Experimental Area in Jinzhong, Shanxi Province, China, aimed to develop a method to accurately predict maize yield by combining UAV multispectral data with meteorological information. A DJI Mavic 3M UAV was used to capture four-band imagery (red, green, red-edge, and near-infrared), from which 16 vegetation indices were calculated, along with daily meteorological data. Among the eight machine learning algorithms tested, the ensemble models Random Forest and Gradient Boosting Trees performed best, with R2 values of 0.8696 and 0.8163, respectively. SHAP analysis identified MSR and RVI as the most important features. The prediction accuracy varied across growth stages, with the jointing stage showing the highest performance (R2 = 0.7161), followed by the flowering stage (R2 = 0.6588). The yield exhibited a strip-like spatial distribution, ranging from 6450 to 9600 kg·ha−1, influenced by field management, soil characteristics, and microtopography. K-means clustering revealed high-yield areas in the central-northern region and low-yield areas in the south, supported by a global Moran’s I index of 0.4290, indicating moderate positive spatial autocorrelation. This study demonstrates that integrating UAV multispectral data, meteorological information, and machine learning can achieve accurate yield prediction (with a relative RMSE of about 2.8%) and provides a quantitative analytical framework for spatial management in drought-prone areas. Full article
(This article belongs to the Section Precision and Digital Agriculture)
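Global Moran's I, reported above as 0.4290, has a closed-form definition: I = (n / ΣW) · Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A minimal NumPy implementation, checked on a toy 1-D chain of sites (the weights matrix and yield gradient below are illustrative, not the study's field data):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for observations `values` and an n x n spatial weights matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                               # deviations from the mean
    num = (z[:, None] * z[None, :] * w).sum()      # weighted cross-products of neighbors
    return (n / w.sum()) * num / (z ** 2).sum()

# Rook adjacency along a chain of 5 sites (symmetric binary weights)
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0

smooth = morans_i([1, 2, 3, 4, 5], w)   # monotone gradient -> prints 0.5 (positive)
```

A smooth spatial gradient yields positive I; an alternating pattern yields negative I. The study's value of 0.4290 sits in the moderate-positive range, matching its description of strip-like yield zones.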

33 pages, 4122 KB  
Article
Empirical Evaluation of UNet for Segmentation of Applicable Surfaces for Seismic Sensor Installation
by Mikhail Uzdiaev, Marina Astapova, Andrey Ronzhin and Aleksandra Figurek
J. Imaging 2026, 12(1), 34; https://doi.org/10.3390/jimaging12010034 - 8 Jan 2026
Viewed by 267
Abstract
The deployment of wireless seismic nodal systems necessitates the efficient identification of optimal locations for sensor installation, considering factors such as ground stability and the absence of interference. Semantic segmentation of satellite imagery has advanced significantly, yet its application to this specific task remains unexplored. This work presents a baseline empirical evaluation of the U-Net architecture for the semantic segmentation of surfaces applicable for seismic sensor installation. We utilize a novel dataset of Sentinel-2 multispectral images, specifically labeled for this purpose. The study investigates the impact of pretrained encoders (EfficientNetB2, Cross-Stage Partial Darknet53—CSPDarknet53, and Multi-Axis Vision Transformer—MAxViT), different combinations of Sentinel-2 spectral bands (Red, Green, Blue (RGB), RGB+Near Infrared (NIR), 10-bands with 10 and 20 m/pix spatial resolution, full 13-band), and a technique for improving small object segmentation by modifying the input convolutional layer stride. Experimental results demonstrate that the CSPDarknet53 encoder generally outperforms the others (IoU = 0.534, Precision = 0.716, Recall = 0.635). The combination of RGB and Near-Infrared bands (10 m/pixel resolution) yielded the most robust performance across most configurations. Reducing the input stride from 2 to 1 proved beneficial for segmenting small linear objects like roads. The findings establish a baseline for this novel task and provide practical insights for optimizing deep learning models in the context of automated seismic nodal network installation planning. Full article
(This article belongs to the Special Issue Image Segmentation: Trends and Challenges)

24 pages, 22005 KB  
Article
Soil Organic Matter Prediction by Fusing Supervised-Derived VisNIR Variables with Multispectral Remote Sensing
by Lintao Lv, Changkun Wang, Ziran Yuan, Xiaopan Wang, Liping Liu, Jie Liu, Mengsi Jia, Yuguo Zhao and Xianzhang Pan
Remote Sens. 2026, 18(1), 121; https://doi.org/10.3390/rs18010121 - 29 Dec 2025
Viewed by 286
Abstract
Accurate mapping of soil organic matter (SOM) is essential for soil management. Remote sensing (RS) provides broad spatial coverage, while visible and near-infrared (VisNIR) laboratory spectroscopy enables accurate point-scale SOM prediction. Conventional methods for fusing RS and VisNIR data often rely on principal components (PCs) extracted from VisNIR data that have an indirect relationship to SOM and employ ordinary kriging (OK) for their spatialization, resulting in limited accuracy. This study introduces an enhanced fusion method using partial least squares regression (PLSR) to extract supervised latent variables (LVs) related to SOM and residual kriging (RK) for spatialization. Two fusion strategies (four variants)—RS + first i PCs/LVs and RS + ith PC/LV—were evaluated in the contrasting agricultural regions of Da’an City (n = 100) and Fengqiu County (n = 117), China. Laboratory-measured soil spectra (400–2400 nm) were integrated with multiple temporal combinations of Landsat 8 imagery. The results demonstrate that LVs exhibit stronger correlations with SOM than PCs. For example, in Da’an, LV6 (r = 0.36) substantially outperformed PC6 (r = 0.02), while in Fengqiu, LV3 (r = 0.40) outperformed PC3 (r = −0.05). RK also dramatically improved their spatialization over OK, as demonstrated in Da’an where the R2 for LV2 increased from 0.21 to 0.50. More importantly, in SOM prediction performance, all four fusion variants improved accuracy over RS alone, and the LV-based fusion achieved superior results. In terms of mean performance, RS + first i LVs achieved the highest R2 (0.39), lowest RMSE (5.76 g/kg), and minimal variability (SD of R2 = 0.06; SD of RMSE = 0.28 g/kg) in Da’an, outperforming the PC-based fusion (R2 = 0.37, SD = 0.09; RMSE = 5.85 g/kg, SD = 0.42 g/kg). In Fengqiu, the two fusion strategies demonstrated comparable performance.
Regarding peak performance, the PC-based fusion in Da’an achieved a maximum R2 of 0.57 (RMSE = 4.82 g/kg), while the LV-based fusion delivered comparable results (R2 = 0.55, RMSE = 4.94 g/kg); both surpassed the RS-only method (R2 = 0.54 and RMSE = 4.98 g/kg). In Fengqiu, however, the LV-based fusion demonstrated superiority, reaching the highest R2 of 0.40, compared to 0.38 for the PC-based fusion and 0.35 for RS alone. Furthermore, across different temporal scenarios, the LV-based fusion also exhibited greater stability, particularly in Da’an, where the RS + first i LVs method yielded the lowest standard deviation in R2 (0.06 vs. 0.09 for PC-based fusion). In summary, integrating LV-derived variables with RS data enhances the accuracy and temporal stability of SOM predictions, making it a preferable approach for practical SOM mapping. Full article

19 pages, 28579 KB  
Article
Fusion of Sentinel-2 and Sentinel-3 Images for Producing Daily Maps of Advected Aerosols at Urban Scale
by Luciano Alparone, Massimo Bianchini, Andrea Garzelli and Simone Lolli
Remote Sens. 2026, 18(1), 116; https://doi.org/10.3390/rs18010116 - 29 Dec 2025
Viewed by 335
Abstract
In this study, the authors introduce an unsupervised procedure designed for real-time generation of maps depicting advected aerosols, specifically focusing on desert dust and smoke originating from biomass combustion. This innovative approach leverages the high-resolution capabilities provided by Sentinel-2 imagery, operating at a 10 m scale, which is particularly advantageous for urban settings. Concurrently, it takes advantage of the near-daily revisit frequency afforded by Sentinel-3. The methodology involves generating aerosol maps at a 10 m resolution using bands 2, 3, 4, and 5 of Sentinel-2, available in L1C and L2A formats, conducted every five days, contingent upon the absence of cloud cover. Subsequently, this map is enhanced every two days through spatial modulation, utilizing a similar map derived from the visible and near-infrared (VNIR) observations captured by the OLCI instrument aboard Sentinel-3, which is accessible at a 300 m scale. Data from the two satellites undergo independent processing, with integration at the feature level. This process combines Sentinel-3 and Sentinel-2 maps to update aerosol concentrations in each 300 m × 300 m grid every two days or more frequently. For the dates when Sentinel-2 data are unavailable, the spatial texture of the aerosol distribution within these grid cells is extrapolated. This spatial index represents an advancement over prior studies that focused on differentiating between dust and smoke based on their scattering and absorption characteristics. The entire process is rigorously validated by comparing it with point measurements of fine- and coarse-mode Aerosol Optical Depth (AOD) obtained from AERONET stations situated at the test sites, ensuring the reliability and accuracy of the generated maps. Full article

15 pages, 2523 KB  
Article
Shutter Speed Influences the Capability of a Low-Cost Multispectral Sensor to Estimate Turfgrass (Cynodon dactylon L.—Poaceae) Vegetation Vigor Under Different Solar Radiation Conditions
by Rosa M. Martínez-Meroño, Pedro F. Freire-García, Nicola Furnitto, Sebastian Lupica, Salvatore Privitera, Giuseppe Sottosanti, Maria Spagnuolo, Luciano Caruso, Emanuele Cerruto, Sabina Failla, Domenico Longo, Giuseppe Manetto, Giampaolo Schillaci and Juan Miguel Ramírez-Cuesta
Sensors 2026, 26(1), 47; https://doi.org/10.3390/s26010047 - 20 Dec 2025
Viewed by 569
Abstract
Radiometric calibration of multispectral imagery plays a critical role in the determination of vegetation-related features. This radiometric calibration strongly depends on a proper sensor configuration when acquiring images, the shutter speed being a critical parameter. The objective of the present study was to appraise the influence of shutter speed on the reflectance in the visible and near-infrared (NIR) spectral regions registered by a low-cost multispectral sensor (MAPIR Survey3) on a homogeneous field of turfgrass (Cynodon dactylon L.—Poaceae) and on the vegetation index (VI) values calculated from them, under different solar radiation conditions. For this purpose, 10 shutter speed configurations were tested in field campaigns with variable solar radiation values. The main results demonstrated that the reflectance in the green spectral region was more sensitive to shutter speed than that of the red and NIR spectral regions, particularly under high solar radiation conditions. Moreover, VIs calculated using the green band were more sensitive to slow shutter speeds, thus presenting a higher probability of providing meaningless artifact values. In conclusion, this study provides shutter speed recommendations under different illumination conditions to optimize the reflectance and the VI sensitivity within the image, which can be applied as a simple method to optimize image acquisition from unmanned aerial vehicles under varying solar radiation conditions. Full article

28 pages, 15780 KB  
Article
Towards Near-Real-Time Estimation of Live Fuel Moisture Content from Sentinel-2 for Fire Management in Northern Thailand
by Chakrit Chotamonsak, Duangnapha Lapyai and Punnathorn Thanadolmethaphorn
Fire 2025, 8(12), 475; https://doi.org/10.3390/fire8120475 - 11 Dec 2025
Viewed by 531
Abstract
Wildfires are a recurring dry-season hazard in northern Thailand, contributing to severe air pollution and trans-boundary haze. However, the region lacks the ground-based measurements necessary for monitoring Live Fuel Moisture Content (LFMC), a key variable influencing vegetation flammability. This study presents a preliminary framework for near-real-time (NRT) LFMC estimation using Sentinel-2 multispectral imagery. The system integrates normalized vegetation and moisture-related indices, including the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Infrared Index (NDII), and the Moisture Stress Index (MSI) with an NDVI-derived evapotranspiration fraction (ETf) within a heuristic modeling approach. The workflow includes cloud and shadow masking, weekly to biweekly compositing, and pixel-wise normalization to address the persistent cloud cover and heterogeneous land surfaces. Although currently unvalidated, the LFMC estimates capture the relative spatial and temporal variations in vegetation moisture across northern Thailand during the 2024 dry season (January–April). Evergreen forests maintained higher moisture levels, whereas deciduous forests and agricultural landscapes exhibited pronounced drying from January to March. Short-lag responses to rainfall suggest modest moisture recovery following precipitation, although the relationship is influenced by additional climatic and ecological factors not represented in the heuristic model. LFMC-derived moisture classes reflect broad seasonal dryness patterns but should not be interpreted as direct fire danger indicators. This study demonstrates the feasibility of generating regional LFMC indicators in a data-scarce tropical environment and outlines a clear pathway for future calibration and validation, including field sampling, statistical optimization, and benchmarking against global LFMC products. 
Until validated, the proposed NRT LFMC estimation product should be used to assess relative vegetation dryness and to support the refinement and development of future operational fire management tools, including early warnings, burn-permit regulation, and resource allocation. Full article
(This article belongs to the Section Fire Science Models, Remote Sensing, and Data)

19 pages, 3720 KB  
Article
From RGB to Synthetic NIR: Image-to-Image Translation for Pineapple Crop Monitoring Using Pix2PixHD
by Darío Doria Usta, Ricardo Hundelshaussen, Carlos Martínez López, Delio Salgado Chamorro, César López Martínez, João Felipe Coimbra Leite Costa and Marcel Arcari Bassani
Technologies 2025, 13(12), 569; https://doi.org/10.3390/technologies13120569 - 5 Dec 2025
Abstract
Near-infrared (NIR) imaging plays a crucial role in precision agriculture; however, the high cost of multispectral sensors limits its widespread adoption. In this study, we generate synthetic NIR images (2592 × 1944 pixels) of pineapple crops from standard RGB drone imagery using the Pix2PixHD framework. The model was trained for 580 epochs, with checkpoints saved after epoch 1 and every 10 epochs thereafter. While models trained beyond epoch 460 achieved marginally higher metrics, they introduced visible artifacts. Model 410 was identified as the most effective, offering consistent quantitative performance while producing artifact-free results. Evaluation of Model 410 across 229 test images showed a mean SSIM of 0.6873, PSNR of 29.92, RMSE of 8.146, and PCC of 0.6565, indicating moderate to high structural similarity and reliable spectral accuracy of the synthetic NIR data. The proposed approach demonstrates that reliable NIR information can be obtained without expensive multispectral equipment, reducing costs and enhancing accessibility for farmers. By enabling advanced tasks such as vegetation segmentation and crop health monitoring, this work highlights the potential of deep learning–based image translation to support sustainable and data-driven agricultural practices. Future directions include extending the method to other crops, environmental conditions, and real-time drone monitoring. Full article
(This article belongs to the Special Issue AI-Driven Optimization in Robotics and Precision Agriculture)
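The image-quality metrics reported in this abstract (PSNR, RMSE, PCC) have standard definitions; a minimal sketch over flattened pixel lists (SSIM is omitted for brevity, and function names are illustrative rather than taken from the paper's code):

```python
import math

def rmse(pred, target):
    """Root-mean-square error between two equally sized pixel lists."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the synthetic
    image is closer to the real one."""
    e = rmse(pred, target)
    return float("inf") if e == 0 else 20 * math.log10(max_val / e)

def pcc(pred, target):
    """Pearson correlation coefficient between predicted and real pixels
    (assumes neither list is constant)."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return cov / (sp * st)
```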
21 pages, 4532 KB  
Article
Satellite-Derived Spectral Index Analysis for Drought and Groundwater Monitoring in Doñana Wetlands: A Tool for Informed Conservation Strategies
by Emilio Ramírez-Juidias, Paula Romero-Beltrán and Clara-Isabel González-López
Geographies 2025, 5(4), 75; https://doi.org/10.3390/geographies5040075 - 3 Dec 2025
Abstract
Climate change and intensifying human activity are accelerating water loss in the Doñana wetlands, an important ecological reserve in southern Europe. This research presents the Water Inference Moisture Index (WIMI), a spectral index designed to evaluate surface water dynamics using Sentinel-2 L2A imagery from 2016 to 2024. The index, implemented within a machine learning approach, uses the near-infrared (B08) and red (B04) bands to detect wetland water with high sensitivity, even under dense vegetation cover. Spatial and temporal changes in water availability were assessed by combining WIMI with long-term precipitation and climate records. The results show that surface water is gradually disappearing across the study area, even in years with normal rainfall, suggesting that water retention capacity is declining and stress on groundwater is rising. Annual WIMI values were moderately correlated with rainfall, but this correlation has weakened in recent years. Comparison with the IPCC Sixth Assessment Report indicates that these local effects of climate change are part of a broader trend toward aridification. The study demonstrates that WIMI is a useful, low-cost, and scalable tool for wetland monitoring that can support climate adaptation and conservation efforts. The results call for immediate policy action to protect groundwater resources and to support the Sustainable Development Goals for climate action and water security. Full article
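The abstract names the bands WIMI draws on (B08 and B04) but not its formula, so the published index cannot be reproduced here. Purely as a hypothetical illustration of a water-sensitive combination of those two bands, a normalized difference exploits the fact that open water absorbs strongly in the NIR:

```python
def normalized_difference(b04, b08, eps=1e-6):
    """Hypothetical water-sensitive combination of red (B04) and NIR (B08)
    reflectances; NOT the published WIMI formula. Open water reflects less
    NIR than red, so the result is positive over water and negative over
    vegetation, whose NIR reflectance is high."""
    return (b04 - b08) / (b04 + b08 + eps)
```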
18 pages, 16142 KB  
Article
Unmanned Aerial Vehicles and Low-Cost Sensors for Monitoring Biophysical Parameters of Sugarcane
by Maurício Martello, Mateus Lima Silva, Carlos Augusto Alves Cardoso Silva, Rodnei Rizzo, Ana Karla da Silva Oliveira and Peterson Ricardo Fiorio
AgriEngineering 2025, 7(12), 403; https://doi.org/10.3390/agriengineering7120403 - 1 Dec 2025
Abstract
Unmanned Aerial Vehicles (UAVs) equipped with low-cost RGB and near-infrared (NIR) cameras represent an efficient and scalable technology for monitoring sugarcane crops. This study evaluated the potential of UAV imagery and three-dimensional crop modeling to estimate sugarcane height and yield under different nitrogen fertilization levels. The experiment comprised 28 plots subjected to four nitrogen rates, and images were processed using a Structure from Motion (SfM) algorithm to generate Digital Surface Models (DSMs). Crop Height Models (CHMs) were obtained by subtracting DTMs from DSMs. The most accurate CHM was derived from the combination of the reference DTM and the NIR-based DSM (R2 = 0.957; RMSE = 0.162 m), while the strongest correlation between height and yield was observed at 200 days after cutting (R2 = 0.725; RMSE = 4.85 t ha−1). The NIR-modified sensor, developed at a total cost of USD 61.59, demonstrated performance comparable with commercial systems that are up to two hundred times more expensive. These results demonstrate that the proposed low-cost NIR sensor provides accurate, reliable, and accessible data for three-dimensional modeling of sugarcane. Full article
(This article belongs to the Section Remote Sensing in Agriculture)
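The CHM construction described above is a per-cell raster difference between the surface model and the terrain model. A minimal sketch over grid cells; clamping negative residuals (elevation noise below the terrain surface) to zero is an assumption for illustration, not a step stated in the abstract:

```python
def crop_height_model(dsm, dtm):
    """Crop Height Model as the per-cell difference DSM - DTM over two
    equally shaped 2D grids of elevations (meters). Negative differences
    are clamped to zero, treating them as reconstruction noise."""
    return [[max(s - t, 0.0) for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]
```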