Search Results (370)

Search Parameters:
Keywords = multispectral correction

24 pages, 6707 KiB  
Article
Deep Learning-Based Spectral Reconstruction Technology for Water Color Remote Sensing and Error Analysis
by Rugang Tang, Li He, Biyun Guo and Cuishuo Ye
Remote Sens. 2025, 17(16), 2860; https://doi.org/10.3390/rs17162860 - 17 Aug 2025
Abstract
Land observation multispectral satellites (e.g., Landsat-8/9 and Sentinel-2) offer high spatial resolution but have limited spectral bands for water color observation and insufficient spectral resolution. This study proposes a spectral reconstruction model based on a residual neural network (Deep Spectral Reconstruction Learning Network, DSR-Net) to provide additional spectral band support for nearshore water observations. The model is trained on 60 million pairs of quasi-synchronous reflectance data and achieves stable reconstruction of 15 water color channels of surface-level reflectance for water pixels (ρw) from visible to near-infrared bands, accounting for sensor noise and atmospheric correction errors. Validation against AERONET-OC data shows that the root mean square error of ρw reconstructed by DSR-Net ranges from 4.09 × 10⁻³ to 5.18 × 10⁻³, a reduction of 25% to 43% compared with the original atmospheric correction results. The reconstruction accuracy reaches the observation level of the Sentinel-3/OLCI water color sensor and is applicable across water categories, effectively supporting nearshore water color observation tasks such as colored dissolved organic matter inversion and cyanobacteria monitoring. The errors in the multispectral reflectance-based ρw arise primarily from sensor noise and atmospheric correction errors. After DSR-Net reconstruction, approximately 59% of the uncertainty caused by sensor noise and 38% of that caused by atmospheric correction errors are removed. In summary, the spectral reconstruction products generated by DSR-Net not only significantly enhance the water color observation capabilities of current satellite sensors but also provide critical technical support for marine environmental monitoring and the design of next-generation sensors.
(This article belongs to the Special Issue Remote Sensing in Natural Resource and Water Environment II)
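The core idea the abstract describes is residual learning: the network predicts a correction added back onto its input rather than the output from scratch. A minimal sketch of that idea follows; all dimensions and weights are hypothetical and untrained, not DSR-Net's actual architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, b1, w2, b2):
    """One residual block: x + F(x). The skip connection preserves the
    input spectrum while F(x) learns a refinement (sketch only)."""
    h = relu(x @ w1 + b1)
    return x + (h @ w2 + b2)

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 8, 32, 15   # e.g., 8 multispectral bands -> 15 channels

# Hypothetical weights; a real model would be fitted to quasi-synchronous
# reflectance pairs as the abstract describes.
w1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
w_head = rng.normal(0, 0.1, (n_in, n_out)); b_head = np.zeros(n_out)

rho_ms = rng.uniform(0, 0.05, (4, n_in))        # 4 water pixels, 8 bands
features = residual_block(rho_ms, w1, b1, w2, b2)
rho_rec = features @ w_head + b_head            # 15 reconstructed channels
print(rho_rec.shape)                            # (4, 15)
```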

31 pages, 8383 KiB  
Article
Quantifying Emissivity Uncertainty in Multi-Angle Long-Wave Infrared Hyperspectral Data
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2025, 17(16), 2823; https://doi.org/10.3390/rs17162823 - 14 Aug 2025
Abstract
This study quantifies emissivity uncertainty using a new, specifically collected multi-angle thermal hyperspectral dataset, Nittany Radiance. Unlike previous research that primarily relied on model-based simulations, multispectral satellite imagery, or laboratory measurements, we use airborne hyperspectral long-wave infrared (LWIR) data captured from multiple viewing angles. The data were collected using the Blue Heron LWIR hyperspectral imaging sensor, flown on a light aircraft in a circular orbit centered on the Penn State University campus. This sensor, with 256 spectral bands (7.56–13.52 μm), captures multiple overlapping images with varying ranges and angles. We analyzed nine different natural and man-made targets across varying viewing geometries. We present a multi-angle atmospheric correction method, similar to FLAASH-IR, modified for multi-angle scenarios. Our results show that emissivity remains relatively stable at viewing zenith angles between 40 and 50° but decreases as angles exceed 50°. We found that emissivity uncertainty varies across the spectral range, with the 10.14–11.05 μm region showing the greatest stability (standard deviations typically below 0.005), while uncertainty increases significantly in regions with strong atmospheric absorption features, particularly around 12.6 μm. These results show how reliable multi-angle hyperspectral measurements are and why angle-specific atmospheric correction matters for non-nadir imaging applications.
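The uncertainty metric the abstract reports (per-band standard deviation of retrieved emissivity across viewing angles) can be summarized in a few lines. The values below are synthetic stand-ins for illustration, not the Nittany Radiance retrievals.

```python
import numpy as np

rng = np.random.default_rng(1)
angles = np.array([40, 45, 50, 55, 60])           # viewing zenith angles, deg
wavelengths = np.linspace(7.56, 13.52, 256)        # sensor range from abstract

# Synthetic retrieved emissivity, shape (targets, angles, bands); real values
# would come from the multi-angle atmospheric correction.
emissivity = 0.95 + 0.004 * rng.standard_normal((9, angles.size, wavelengths.size))

# Per-band standard deviation across viewing angles, averaged over targets:
# one straightforward way to quantify angular variability per wavelength.
band_sigma = emissivity.std(axis=1).mean(axis=0)

stable = (wavelengths >= 10.14) & (wavelengths <= 11.05)
print(band_sigma[stable].mean() < 0.01)
```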

23 pages, 3875 KiB  
Article
Soil Water-Soluble Ion Inversion via Hyperspectral Data Reconstruction and Multi-Scale Attention Mechanism: A Remote Sensing Case Study of Farmland Saline–Alkali Lands
by Meichen Liu, Shengwei Zhang, Jing Gao, Bo Wang, Kedi Fang, Lu Liu, Shengwei Lv and Qian Zhang
Agronomy 2025, 15(8), 1779; https://doi.org/10.3390/agronomy15081779 - 24 Jul 2025
Abstract
The salinization of agricultural soils is a serious threat to farming and ecological balance in arid and semi-arid regions. Accurate estimation of soil water-soluble ions (calcium, carbonate, magnesium, and sulfate) is necessary for reliable monitoring of soil salinization and sustainable land management. Hyperspectral ground-based data are valuable in soil salinization monitoring, but the acquisition cost is high and the coverage is small. Therefore, this study proposes a two-stage deep learning framework with multispectral remote-sensing images. First, the wavelet transform is used to enhance the Transformer and extract fine-grained spectral features to reconstruct the ground-based hyperspectral data. A comparison with ground-based hyperspectral data shows that the reconstructed spectra match the measured data in the 450–998 nm range, with R2 up to 0.98 and MSE = 0.31. This high similarity compensates for the low spectral resolution and weak feature expression of multispectral remote-sensing data. Subsequently, this enhanced spectral information was integrated and fed into a novel multiscale self-attentive Transformer model (MSATransformer) to invert four water-soluble ions. Compared with BPANN, MLP, and the standard Transformer model, our model remains robust across different spectra, achieving an R2 of up to 0.95 and reducing the average relative error by more than 30%. Among them, for the strongly responsive ions magnesium and sulfate, R2 reaches 0.92 and 0.95 (with RMSE of 0.13 and 0.29 g/kg, respectively). For the weakly responsive ions calcium and carbonate, R2 stays above 0.80 (RMSE is below 0.40 g/kg). The MSATransformer framework provides a low-cost and high-accuracy solution to monitor soil salinization at large scales and supports precision farmland management.
(This article belongs to the Special Issue Water and Fertilizer Regulation Theory and Technology in Crops)
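The R2 and MSE figures quoted above for spectral reconstruction quality are standard goodness-of-fit metrics; a minimal sketch of how they are computed between a measured and a reconstructed spectrum (with synthetic spectra over the 450–998 nm range mentioned in the abstract):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y_true, y_pred):
    """Mean squared error between the two spectra."""
    return np.mean((y_true - y_pred) ** 2)

# Illustrative stand-ins for a measured vs. reconstructed spectrum
wl = np.arange(450, 999, 2.0)
measured = 0.2 + 0.1 * np.sin(wl / 80.0)
reconstructed = measured + 0.005 * np.cos(wl / 15.0)   # small residual error

print(round(r2_score(measured, reconstructed), 3))
```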

18 pages, 8486 KiB  
Article
An Efficient Downwelling Light Sensor Data Correction Model for UAV Multi-Spectral Image DOM Generation
by Siyao Wu, Yanan Lu, Wei Fan, Shengmao Zhang, Zuli Wu and Fei Wang
Drones 2025, 9(7), 491; https://doi.org/10.3390/drones9070491 - 11 Jul 2025
Abstract
The downwelling light sensor (DLS) is the industry-standard solution for generating UAV-based digital orthophoto maps (DOMs). Current mainstream DLS correction methods primarily rely on angle compensation. However, due to the temporal mismatch between the DLS sampling intervals and the exposure times of multispectral cameras, as well as external disturbances such as strong wind gusts and abrupt changes in flight attitude, DLS data often become unreliable, particularly at UAV turning points. Building upon traditional angle compensation methods, this study proposes an improved correction approach—FIM-DC (Fitting and Interpolation Model-based Data Correction)—specifically designed for data collection under clear-sky conditions and stable atmospheric illumination, with the goal of significantly enhancing the accuracy of reflectance retrieval. The method addresses three key issues: (1) field tests conducted in the Qingpu region show that FIM-DC markedly reduces the standard deviation of reflectance at tie points across multiple spectral bands and flight sessions, with the most substantial reduction from 15.07% to 0.58%; (2) it effectively mitigates inconsistencies in reflectance within image mosaics caused by anomalous DLS readings, thereby improving the uniformity of DOMs; and (3) FIM-DC accurately corrects the spectral curves of six land cover types in anomalous images, making them consistent with those from non-anomalous images. In summary, this study demonstrates that integrating FIM-DC into DLS data correction workflows for UAV-based multispectral imagery significantly enhances reflectance calculation accuracy and provides a robust solution for improving image quality under stable illumination conditions.
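The fitting-and-interpolation idea behind FIM-DC (under the stable-illumination assumption the abstract states) can be sketched as: fit a smooth model to the DLS irradiance series, reject samples that deviate strongly (e.g., at turning points), then evaluate the fit at each camera exposure time. All numbers below are illustrative, and the exact fitting model used by FIM-DC is not specified here.

```python
import numpy as np

# Timestamps (s) of DLS irradiance samples; the DLS clock and the camera
# exposures are not synchronized, which is the mismatch being corrected.
t_dls = np.arange(0.0, 10.0, 0.5)
rng = np.random.default_rng(2)
irradiance = 1000.0 + 2.0 * t_dls + rng.normal(0, 1.0, t_dls.size)
irradiance[12] = 700.0                      # hypothetical turn-point outlier

# 1) flag samples far from a least-squares linear fit (stable illumination),
# 2) refit on the inliers, 3) evaluate the fit at each exposure time.
coef = np.polyfit(t_dls, irradiance, 1)
resid = irradiance - np.polyval(coef, t_dls)
keep = np.abs(resid) < 3.0 * np.median(np.abs(resid)) / 0.6745  # robust sigma
coef = np.polyfit(t_dls[keep], irradiance[keep], 1)

t_exposure = np.array([1.3, 4.7, 6.1])      # multispectral exposure times
corrected = np.polyval(coef, t_exposure)
print(keep[12], corrected.shape)            # outlier at t=6.0 s is rejected
```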

27 pages, 4364 KiB  
Article
Mapping Soil Burn Severity and Crown Scorch Percentage with Sentinel-2 in Seasonally Dry Deciduous Oak and Pine Forests in Western Mexico
by Oscar Enrique Balcázar Medina, Enrique J. Jardel Peláez, Daniel José Vega-Nieva, Adrián Israel Silva-Cardoza and Ramón Cuevas Guzmán
Remote Sens. 2025, 17(13), 2307; https://doi.org/10.3390/rs17132307 - 5 Jul 2025
Abstract
There is a need to evaluate Sentinel-2 (S2) fire severity spectral indices (SFSIs) for predicting vegetation and soil burn severity for a variety of ecosystems. We evaluated the performance of 26 SFSIs across three fires in seasonally dry oak–pine forests in central-western Mexico. The SFSIs were derived from composites of S2 multispectral images obtained with Google Earth Engine (GEE), processed using different techniques, for periods of 30, 60 and 90 days. Field verification was conducted through stratified random sampling by severity class on 100 circular plots of 707 m2, where immediate post-fire effects were evaluated for five strata, including canopy scorch in the overstory (OCS)—divided into canopy (CCS) and subcanopy (SCS)—the understory (UCS), and soil burn severity (SBS). Best fits were obtained with relative, phenologically corrected indices of 60–90 days. For canopy scorch percentage prediction, the indices RBR3c and RBR5n, using NIR (bands 8 and 8a) and SWIR (band 12), provided the best accuracy (R2 = 0.82). SBS could be best mapped from RBR1c (using bands 11 and 12) with relatively acceptable precision (R2 = 0.62). Our results support the feasibility of separately mapping OCS and SBS from S2 in relatively open, seasonally dry oak–pine forests, potentially supporting post-fire management planning.
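The relativized burn ratio (RBR) family of indices referenced above builds on the Normalized Burn Ratio (NBR). A minimal sketch using the commonly cited Parks et al. form of RBR; the study's specific variants (RBR3c, RBR5n, RBR1c) differ in band choice and phenological correction, which are not reproduced here.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR (e.g., S2 band 8/8a) and SWIR (band 12)."""
    return (nir - swir) / (nir + swir)

def rbr(pre_nbr, post_nbr):
    """Relativized Burn Ratio: dNBR scaled by pre-fire NBR (Parks et al. form)."""
    dnbr = pre_nbr - post_nbr
    return dnbr / (pre_nbr + 1.001)

# Illustrative pre- and post-fire reflectances for two burned pixels
pre = nbr(np.array([0.42, 0.40]), np.array([0.18, 0.20]))
post = nbr(np.array([0.25, 0.12]), np.array([0.30, 0.33]))
print(rbr(pre, post).round(3))   # positive values indicate burn severity
```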

15 pages, 1887 KiB  
Article
Multispectral Reconstruction in Open Environments Based on Image Color Correction
by Jinxing Liang, Xin Hu, Yifan Li and Kaida Xiao
Electronics 2025, 14(13), 2632; https://doi.org/10.3390/electronics14132632 - 29 Jun 2025
Abstract
Spectral reconstruction based on digital imaging has become an important way to obtain spectral images with high spatial resolution. The current research has made great strides in the laboratory; however, dealing with rapidly changing light sources, illumination, and imaging parameters in an open environment presents significant challenges for spectral reconstruction. This is because a spectral reconstruction model established under one set of imaging conditions is not suitable for use under different imaging conditions. In this study, considering the principle of multispectral reconstruction, we propose a method for multispectral reconstruction in open environments based on image color correction. In the proposed method, a whiteboard is used as a medium to calculate the color correction matrices from an open environment and transfer them to the laboratory. After the digital image is corrected, its multispectral image can be reconstructed using the pre-established multispectral reconstruction model in the laboratory. The proposed method was tested in simulations and practical experiments using different datasets and illuminations. The results show that the root-mean-square error of the color chart is below 2.6% in the simulation experiment and below 6.0% in the practical experiment, which illustrates the effectiveness of the proposed method.
(This article belongs to the Special Issue Image Fusion and Image Processing)
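The color correction step described above amounts to estimating a linear map from field-captured RGB to laboratory-condition RGB. A least-squares 3×3 matrix sketch (patch values and the "true" transform below are simulated, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
dst = rng.uniform(0.05, 0.95, (24, 3))            # "laboratory" patch colors
M_true = np.array([[1.1, 0.05, 0.0],
                   [0.02, 0.9, 0.03],
                   [0.0, 0.04, 1.2]])
src = dst @ np.linalg.inv(M_true).T               # simulated field capture

# Least-squares 3x3 correction matrix mapping field RGB -> laboratory RGB,
# the role played by the whiteboard-derived matrices in the pipeline.
M, *_ = np.linalg.lstsq(src, dst, rcond=None)
corrected = src @ M
print(np.allclose(corrected, dst, atol=1e-6))     # True: exact linear recovery
```

In practice the fit would use measured chart patches and include noise, so the recovery would only be approximate.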

23 pages, 14051 KiB  
Article
A Novel Method for Water Surface Debris Detection Based on YOLOV8 with Polarization Interference Suppression
by Yi Chen, Honghui Lin, Lin Xiao, Maolin Zhang and Pingjun Zhang
Photonics 2025, 12(6), 620; https://doi.org/10.3390/photonics12060620 - 18 Jun 2025
Abstract
Aquatic floating debris detection is a key technological foundation for ecological monitoring and integrated water environment management. It holds substantial scientific and practical value in applications such as pollution source tracing, floating debris control, and maritime navigation safety. However, this field faces ongoing challenges due to water surface polarization. Reflections of polarized light produce intense glare, resulting in localized overexposure, detail loss, and geometric distortion in captured images. These optical artifacts severely impair the performance of conventional detection algorithms, increasing both false positives and missed detections. To overcome these imaging challenges in complex aquatic environments, we propose a novel YOLOv8-based detection framework with integrated polarized light suppression mechanisms. The framework consists of four key components: a fisheye distortion correction module, a polarization feature processing layer, a customized residual network with Squeeze-and-Excitation (SE) attention, and a cascaded pipeline for super-resolution reconstruction and deblurring. Additionally, we developed the PSF-IMG dataset (Polarized Surface Floats), which includes common floating debris types such as plastic bottles, bags, and foam boards. Extensive experiments demonstrate the network's robustness in suppressing polarization artifacts and enhancing feature stability under dynamic optical conditions.
(This article belongs to the Special Issue Advancements in Optical Measurement Techniques and Applications)
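Of the components listed, the Squeeze-and-Excitation (SE) attention block has a compact, well-known form: pool each channel to a scalar, pass the channel vector through a small bottleneck, and rescale the feature map by the resulting gates. A minimal numpy sketch with hypothetical, untrained weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel, run the
    channel vector through a two-layer bottleneck, rescale the channels."""
    squeezed = feature_map.mean(axis=(1, 2))              # (C,) channel stats
    gates = sigmoid(np.maximum(squeezed @ w1, 0.0) @ w2)  # (C,) gates in (0, 1)
    return feature_map * gates[:, None, None]

rng = np.random.default_rng(4)
c, h, w, r = 16, 8, 8, 4                  # channels, height, width, reduction
fmap = rng.standard_normal((c, h, w))
w1 = rng.normal(0, 0.1, (c, c // r))      # illustrative, untrained weights
w2 = rng.normal(0, 0.1, (c // r, c))

out = se_block(fmap, w1, w2)
print(out.shape)   # (16, 8, 8)
```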

20 pages, 4858 KiB  
Article
Sensitive Multispectral Variable Screening Method and Yield Prediction Models for Sugarcane Based on Gray Relational Analysis and Correlation Analysis
by Shimin Zhang, Huojuan Qin, Xiuhua Li, Muqing Zhang, Wei Yao, Xuegang Lyu and Hongtao Jiang
Remote Sens. 2025, 17(12), 2055; https://doi.org/10.3390/rs17122055 - 14 Jun 2025
Abstract
Sugarcane yield prediction plays a pivotal role in enabling farmers to monitor crop development and optimize cultivation practices, guiding harvesting operations for sugar mills. In this study, we established three experimental fields, planted with three main sugarcane cultivars in Guangxi, China, implementing a multi-gradient fertilization design with 39 plots and 810 sampling grids. Multispectral imagery was acquired by unmanned aerial vehicles (UAVs) during five critical growth stages: mid-tillering (T1), late-tillering (T2), mid-elongation (T3), late-elongation (T4), and maturation (T5). Following rigorous image preprocessing (including stitching, geometric correction, and radiometric correction), 16 vegetation indices (VIs) were extracted. To identify yield-sensitive VIs, a spectral feature selection criterion combining gray relational analysis and correlation analysis (GRD-r) was proposed. Subsequently, three supervised learning algorithms—Gradient Boosting Decision Tree (GBDT), Random Forest (RF), and Support Vector Machine (SVM)—were employed to develop both single-stage and multi-stage yield prediction models. Results demonstrated that multi-stage models consistently outperformed their single-stage counterparts. Among the single-stage models, the RF model using T3-stage features achieved the highest accuracy (R2 = 0.78, RMSEV = 7.47 t/hm2). The best performance among multi-stage models was obtained using a GBDT model constructed from a combination of DVI (T1), NDVI (T2), TDVI (T3), NDVI (T4), and SRPI (T5), yielding R2 = 0.83 and RMSEV = 6.63 t/hm2. This study highlights the advantages of integrating multi-temporal spectral features and advanced machine learning techniques for improving sugarcane yield prediction, providing a theoretical foundation and practical guidance for precision agriculture and harvest logistics.
(This article belongs to the Special Issue Proximal and Remote Sensing for Precision Crop Management II)
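The gray relational analysis half of the GRD-r criterion scores each candidate VI series by its closeness to the yield series. A sketch of Deng's gray relational degree with min-max normalization; the data and the exact combination with correlation analysis are illustrative, not the study's.

```python
import numpy as np

def gray_relational_degree(reference, candidates, rho=0.5):
    """Deng's gray relational degree of each candidate series (e.g., a VI)
    to a reference series (e.g., yield), after min-max normalization."""
    def norm(x):
        return (x - x.min()) / (x.max() - x.min())
    x0 = norm(reference)
    deltas = np.array([np.abs(x0 - norm(xi)) for xi in candidates])
    d_min, d_max = deltas.min(), deltas.max()   # global extrema over all series
    coeffs = (d_min + rho * d_max) / (deltas + rho * d_max)
    return coeffs.mean(axis=1)                  # average relational coefficient

yield_obs = np.array([62.0, 75.0, 80.0, 90.0, 97.0])   # t/hm2, illustrative
vi_good = np.array([0.41, 0.52, 0.55, 0.63, 0.69])     # tracks yield closely
vi_poor = np.array([0.60, 0.30, 0.70, 0.20, 0.50])     # weakly related
grd = gray_relational_degree(yield_obs, [vi_good, vi_poor])
print(grd[0] > grd[1])   # the yield-sensitive VI scores higher
```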

21 pages, 23619 KiB  
Article
Optimizing Data Consistency in UAV Multispectral Imaging for Radiometric Correction and Sensor Conversion Models
by Weiguang Yang, Huaiyuan Fu, Weicheng Xu, Jinhao Wu, Shiyuan Liu, Xi Li, Jiangtao Tan, Yubin Lan and Lei Zhang
Remote Sens. 2025, 17(12), 2001; https://doi.org/10.3390/rs17122001 - 10 Jun 2025
Abstract
Recent advancements in precision agriculture have been significantly bolstered by Uncrewed Aerial Vehicles (UAVs) equipped with multispectral sensors. These systems are pivotal in transforming sensor-recorded Digital Number (DN) values into universal reflectance, crucial for ensuring data consistency irrespective of collection time, region, and illumination. This study, conducted across three regions in China using Sequoia and Phantom 4 Multispectral (P4M) cameras, focused on examining the effects of radiometric correction on data consistency and accuracy, and on developing a conversion model for data from these two sensors. Our findings revealed that radiometric correction substantially enhances data consistency in vegetated areas for both sensors, though its impact on non-vegetated areas is limited. Recalibrating reflectance for calibration plates significantly improved the consistency of band values and the accuracy of vegetation index calculations for both cameras. Decision tree and random forest models emerged as more effective for data conversion between the sensors, achieving R2 values up to 0.91. Additionally, the P4M generally outperformed the Sequoia in accuracy, particularly with standard reflectance calibration. These insights emphasize the critical role of radiometric correction in UAV remote sensing for precision agriculture, underscoring the complexities of sensor data consistency and the potential for generalization of models across multi-sensor platforms.
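A common way to turn DN values into reflectance with calibration plates is the empirical line method: fit a per-band linear relation between the plates' known reflectances and their observed DNs, then apply it to the image. A minimal sketch with illustrative numbers (not the study's calibration data):

```python
import numpy as np

# Known reflectance of two calibration plates and their observed mean DNs
plate_reflectance = np.array([0.10, 0.50])
plate_dn = np.array([3200.0, 14800.0])

# Empirical line per band: reflectance = gain * DN + offset
gain, offset = np.polyfit(plate_dn, plate_reflectance, 1)

dn_image = np.array([[5000.0, 9000.0],
                     [12000.0, 15000.0]])
reflectance = gain * dn_image + offset
print(reflectance.round(3))
```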

20 pages, 5153 KiB  
Article
A Practical Method for Red-Edge Band Reconstruction for Landsat Image by Synergizing Sentinel-2 Data with Machine Learning Regression Algorithms
by Yuan Zhang, Zhekui Fan, Wenjia Yan, Chentian Ge and Huasheng Sun
Sensors 2025, 25(11), 3570; https://doi.org/10.3390/s25113570 - 5 Jun 2025
Abstract
Red-edge bands are among the most essential spectral data in multispectral remote sensing images, playing a critical role in monitoring vegetation growth status at regional and global scales. However, the absence of red-edge bands limits the applicability of Landsat images, the most widely used remote sensing data, to vegetation monitoring. This study proposes an innovative method to reconstruct Landsat's red-edge bands. The consistency of corresponding bands of Landsat OLI and Sentinel-2 MSI was first investigated using different resampling approaches and atmospheric correction algorithms. Three machine learning algorithms (ridge regression, gradient boosted regression tree (GBRT), and random forest regression) were then employed to build the red-edge reconstruction model for different vegetation types. With the optimal model, three red-edge bands of Landsat OLI were subsequently obtained in alignment with their derived vegetation indices. Our results showed that bilinear interpolation resampling, in combination with the LaSRC atmospheric correction algorithm, achieved high consistency between the matching bands of OLI and MSI (R2 > 0.88). With the GBRT algorithm, three simulated OLI red-edge bands were highly consistent with those of MSI, with an R2 > 0.96 and an RMSE < 0.0122. The derived Landsat red-edge indices coincide with those of Sentinel-2, with an R2 of 0.78 to 0.95 and an rRMSE of 3.37% to 21.64%. This study illustrates that the proposed red-edge reconstruction method can extend the spectral domain of Landsat OLI and enhance its applicability in global vegetation remote sensing. Meanwhile, it provides potential insight into historical Landsat TM/ETM+ data enhancement for improving time-series vegetation monitoring.
(This article belongs to the Special Issue Machine Learning in Image/Video Processing and Sensing)
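Of the three regression algorithms compared, ridge regression has a closed form that fits in a few lines; the sketch below reconstructs a red-edge band from OLI bands on simulated data (the winning GBRT model would need a tree library, and all data here are synthetic, not OLI/MSI pairs).

```python
import numpy as np

def ridge_fit(X, y, alpha=1e-3):
    """Closed-form ridge regression: (X'X + aI)^-1 X'y."""
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

rng = np.random.default_rng(5)
# Simulated OLI surface reflectance (6 bands) and a target red-edge band
# generated as a noisy linear mix -- purely illustrative training data.
oli = rng.uniform(0.0, 0.4, (500, 6))
true_w = np.array([0.1, 0.3, 0.25, 0.2, 0.1, 0.05])
red_edge = oli @ true_w + rng.normal(0, 0.002, 500)

w = ridge_fit(oli, red_edge)
pred = oli @ w
r2 = 1 - np.sum((red_edge - pred) ** 2) / np.sum((red_edge - red_edge.mean()) ** 2)
print(r2 > 0.96)
```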

25 pages, 5871 KiB  
Article
Estimating Wheat Traits Using Artificial Neural Network-Based Radiative Transfer Model Inversion
by Lukas J. Koppensteiner, Hans-Peter Kaul, Sebastian Raubitzek, Philipp Weihs, Pia Euteneuer, Jaroslav Bernas, Gerhard Moitzi, Thomas Neubauer, Agnieszka Klimek-Kopyra, Norbert Barta and Reinhard W. Neugschwandtner
Remote Sens. 2025, 17(11), 1904; https://doi.org/10.3390/rs17111904 - 30 May 2025
Abstract
Estimating wheat traits based on spectral reflectance measurements and machine learning remains challenging due to the large datasets required for model training and testing. To overcome this limitation, a simulated dataset was generated using the radiative transfer model (RTM) PROSAIL and inverted with an artificial neural network (ANN). Field experiments were conducted in Eastern Austria to measure spectral reflectance and destructively sample plants to measure the wheat traits plant area index (PAI), nitrogen yield (NY), canopy water content (CWC), and above-ground dry matter (AGDM). Four ANN-based RTM inversion models were set up, which varied in spectral resolution (hyperspectral or multispectral) and in whether background soil spectra correction was included. The models were also compared to a simple vegetation index approach using the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red-Edge (NDRE). The RTM inversion model with hyperspectral input data and background soil spectra correction was the best among all tested models for estimating wheat traits during the vegetative developmental stages (PAI: R2 = 0.930, RRMSE = 17.9%; NY: R2 = 0.908, RRMSE = 14.4%; CWC: R2 = 0.967, RRMSE = 17.0%) as well as throughout the whole growing season (PAI: R2 = 0.845, RRMSE = 27.7%; CWC: R2 = 0.884, RRMSE = 20.0%; AGDM: R2 = 0.960, RRMSE = 13.7%). Many models presented in this study provided suitable estimations of the relevant wheat traits PAI, NY, CWC, and AGDM for application in agronomy, breeding, and crop sciences in general.
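The baseline vegetation indices mentioned above have standard band-ratio definitions. A quick sketch with illustrative canopy reflectances (a dense and a sparse plot):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red-Edge index."""
    return (nir - red_edge) / (nir + red_edge)

# Illustrative reflectances: [dense canopy, sparse canopy]
nir = np.array([0.45, 0.30])
red = np.array([0.04, 0.12])
red_edge = np.array([0.20, 0.22])

print(ndvi(nir, red).round(3), ndre(nir, red_edge).round(3))
```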

15 pages, 2957 KiB  
Article
Four-Wavelength Thermal Imaging for High-Energy-Density Industrial Processes
by Alexey Bykov, Anastasia Zolotukhina, Mikhail Poliakov, Andrey Belykh, Roman Asyutin, Anastasiia Korneeva, Vladislav Batshev and Demid Khokhlov
J. Imaging 2025, 11(6), 176; https://doi.org/10.3390/jimaging11060176 - 27 May 2025
Abstract
Multispectral imaging technology holds significant promise in the field of thermal imaging applications, primarily due to its unique ability to provide comprehensive two-dimensional spectral data distributions without the need for any form of scanning. This paper focuses on the development of an accessible basic design concept and a method for estimating temperature maps using a four-channel spectral imaging system. The research examines key design considerations and establishes a workflow for data correction and processing. It involves preliminary camera calibration procedures, which are essential for accurately assessing and compensating for the characteristic properties of optical elements and image sensors. The developed method is validated through testing using a blackbody source, demonstrating a mean relative temperature error of 1%. Practical application of the method is demonstrated through temperature mapping of a tungsten lamp filament. Experiments demonstrated the capability of the developed multispectral camera to detect and visualize non-uniform temperature distributions and localized temperature deviations with sufficient spatial resolution.
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
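Multi-wavelength temperature estimation typically builds on ratio (two-color) pyrometry: the ratio of radiances at two wavelengths determines the temperature independently of a common scale factor. A sketch using the Wien approximation to Planck's law; this is the classical principle, not necessarily the authors' exact algorithm, and the wavelengths below are hypothetical.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(wavelength, temperature):
    """Wien approximation to Planck's law (up to a constant factor)."""
    return wavelength ** -5.0 * np.exp(-C2 / (wavelength * temperature))

def ratio_temperature(r, lam1, lam2):
    """Recover T from the radiance ratio r = L(lam1)/L(lam2) of two channels."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (np.log(r) - 5.0 * np.log(lam2 / lam1))

lam1, lam2 = 0.8e-6, 0.9e-6      # two hypothetical channels, in meters
T_true = 2800.0                   # tungsten-filament-like temperature, K
r = wien_radiance(lam1, T_true) / wien_radiance(lam2, T_true)
print(round(float(ratio_temperature(r, lam1, lam2)), 1))   # ~2800.0
```

With four channels, the extra ratios give redundancy that can be averaged or fitted to reduce noise and emissivity effects.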

18 pages, 3103 KiB  
Article
Multi-Source Remote Sensing-Based High-Accuracy Mapping of Helan Mountain Forests from 2015 to 2022
by Wenjing Cui, Yang Hu and Yun Wu
Forests 2025, 16(5), 866; https://doi.org/10.3390/f16050866 - 21 May 2025
Cited by 1
Abstract
This study develops an optimized approach for small-scale forest area extraction in mountainous regions by integrating Landsat multispectral and ALOS PALSAR-2 radar data through threshold-based classification methods. The threshold fusion method proposed in this study achieves innovations in three key aspects: First, by integrating Landsat NDVI with PALSAR-2 polarization characteristics, it effectively addresses omission errors caused by cloud interference and terrain shadows. Second, the adoption of a decision-level (rather than feature-level) fusion strategy significantly reduces computational complexity. Finally, the incorporation of terrain correction (slope > 20° and aspect 60–120°) enhances classification accuracy, providing a reliable technical solution for small-scale forest monitoring. The results indicate that (1) the combination of Landsat multispectral remote sensing data and PALSAR-2 radar remote sensing data achieved the highest classification accuracy, with an overall forest classification accuracy of 97.62% in 2015 and 96.97% in 2022. The overall classification accuracy of Landsat multispectral remote sensing data alone was 93%, and that of PALSAR radar data alone was 85%, both significantly lower than the results obtained using the combined data for forest classification. (2) Between 2015 and 2022, the forest area of Helan Mountain experienced certain fluctuations, primarily influenced by ecological and natural factors as well as variations in the accuracy of remote sensing data. In conclusion, the method proposed in this study enables more precise estimation of the forest area in the Helan Mountain region of Ningxia. This not only meets the management needs for forest resources in Helan Mountain but also provides a valuable reference for forest area extraction in mountainous regions of Northwest China.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
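Decision-level fusion means each source is classified first and the binary decisions are then combined, rather than stacking features. The sketch below uses an OR rule, which matches the stated motivation of recovering optical omissions (cloud, shadow); all data and thresholds are simulated and illustrative, not the paper's rule.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
truth = rng.random(n) < 0.5                      # synthetic forest/non-forest

# Simulated per-pixel observations: NDVI from optical data and HV backscatter
# (dB) from L-band radar; forest gets higher NDVI and a stronger HV return.
ndvi = np.where(truth, 0.7, 0.3) + rng.normal(0, 0.15, n)
hv_db = np.where(truth, -12.0, -20.0) + rng.normal(0, 3.0, n)

# Independent threshold classifications (decision level)
optical_mask = ndvi > 0.5
radar_mask = hv_db > -16.0

# OR fusion: a pixel missed by the optical classifier (e.g., under cloud)
# can still be recovered by the radar vote.
fused = optical_mask | radar_mask

recall = lambda mask: (mask & truth).sum() / truth.sum()
print(recall(optical_mask) <= recall(fused))     # True by construction of OR
```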

30 pages, 10008 KiB  
Article
Integrating Stride Attention and Cross-Modality Fusion for UAV-Based Detection of Drought, Pest, and Disease Stress in Croplands
by Yan Li, Yaze Wu, Wuxiong Wang, Huiyu Jin, Xiaohan Wu, Jinyuan Liu, Chen Hu and Chunli Lv
Agronomy 2025, 15(5), 1199; https://doi.org/10.3390/agronomy15051199 - 15 May 2025
Abstract
Timely and accurate detection of agricultural disasters is crucial for ensuring food security and enhancing post-disaster response efficiency. This paper proposes a deployable UAV-based multimodal agricultural disaster detection framework that integrates multispectral and RGB imagery to simultaneously capture the spectral responses and spatial structural features of affected crop regions. To this end, we design an innovative stride–cross-attention mechanism, in which stride attention is utilized for efficient spatial feature extraction, while cross-attention facilitates semantic fusion between heterogeneous modalities. The experimental data were collected from representative wheat and maize fields in Inner Mongolia, using UAVs equipped with synchronized multispectral (red, green, blue, red edge, near-infrared) and high-resolution RGB sensors. Through a combination of image preprocessing, geometric correction, and various augmentation strategies (e.g., MixUp, CutMix, GridMask, RandAugment), the quality and diversity of the training samples were significantly enhanced. The model trained on the constructed dataset achieved an accuracy of 93.2%, an F1 score of 92.7%, a precision of 93.5%, and a recall of 92.4%, substantially outperforming mainstream models such as ResNet50, EfficientNet-B0, and ViT across multiple evaluation metrics. Ablation studies further validated the critical role of the stride attention and cross-attention modules in performance improvement. This study demonstrates that the integration of lightweight attention mechanisms with multimodal UAV remote sensing imagery enables efficient, accurate, and scalable agricultural disaster detection under complex field conditions. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application—2nd Edition)
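The fusion mechanism this abstract describes — stride attention for cheap spatial feature extraction, cross-attention to mix the two modalities — can be sketched in a minimal single-head, NumPy-only form. The strided-subset rule and all dimensions below are simplifying assumptions, not the authors' architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens):
    """Tokens of one modality (queries, e.g. RGB features) attend over
    tokens of the other (keys/values, e.g. multispectral features)."""
    d_k = q_tokens.shape[-1]
    scores = q_tokens @ kv_tokens.T / np.sqrt(d_k)   # (Nq, Nkv)
    return softmax(scores, axis=-1) @ kv_tokens      # (Nq, d_k)

def stride_attention(tokens, stride=2):
    """Attend only over every `stride`-th token, cutting the score matrix
    from N x N to N x (N/stride) for cheaper spatial mixing."""
    return cross_attention(tokens, tokens[::stride])
```

In the real model the queries, keys, and values would pass through learned projections; this sketch keeps only the attention arithmetic to show why striding reduces cost.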

22 pages, 9592 KiB  
Article
Discovery of Large Methane Emissions Using a Complementary Method Based on Multispectral and Hyperspectral Data
by Xiaoli Cai, Yunfei Bao, Qiaolin Huang, Zhong Li, Zhilong Yan and Bicen Li
Atmosphere 2025, 16(5), 532; https://doi.org/10.3390/atmos16050532 - 30 Apr 2025
Viewed by 729
Abstract
As global atmospheric methane concentrations surge at an unprecedented rate, the identification of methane super-emitters with significant mitigation potential has become imperative. In this study, we utilize remote sensing satellite data with varying spatiotemporal coverage and resolutions to detect and quantify methane emissions. [...] Read more.
As global atmospheric methane concentrations surge at an unprecedented rate, the identification of methane super-emitters with significant mitigation potential has become imperative. In this study, we utilize remote sensing satellite data with varying spatiotemporal coverage and resolutions to detect and quantify methane emissions. We exploit the synergistic potential of Sentinel-2, EnMAP, and GF5-02-AHSI for methane plume detection. Employing a matched filtering algorithm based on EnMAP and AHSI, we detect and extract methane plumes within emission hotspots in China and the United States, and estimate the emission flux rates of individual methane point sources using the IME model. We present methane plumes from industries such as oil and gas (O&G) and coal mining, with emission rates ranging from 1 to 40 tons per hour, as observed by EnMAP and GF5-02-AHSI. For selected methane emission hotspots in China and the United States, we conduct long-term monitoring and analysis using Sentinel-2. Our findings reveal that the synergy between Sentinel-2, EnMAP, and GF5-02-AHSI enables the precise identification of methane plumes, as well as the quantification and monitoring of their corresponding sources. This methodology is readily applicable to other satellite instruments with coarse SWIR spectral bands, such as Landsat-7 and Landsat-8. The high-frequency satellite-based detection of anomalous methane point sources can facilitate timely corrective actions, contributing to the reduction in global methane emissions. This study underscores the potential of spaceborne multispectral imaging instruments, combining fine pixel resolution with rapid revisit rates, to advance the global high-frequency monitoring of large methane point sources. Full article
(This article belongs to the Special Issue Study of Air Pollution Based on Remote Sensing (2nd Edition))
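The two quantitative steps named in this abstract — a matched filter for per-pixel methane enhancement and the IME model for flux estimation — can be sketched as follows. The covariance regularization constant and the example numbers are illustrative assumptions; the formulas themselves (the classical matched filter and Q = U_eff / L x IME) are standard.

```python
import numpy as np

def matched_filter(spectra, target):
    """Classical matched filter: per-pixel methane enhancement estimate.
    spectra: (n_pixels, n_bands) radiances; target: (n_bands,) CH4 unit
    absorption signature. Returns one enhancement value per pixel."""
    mu = spectra.mean(axis=0)
    # Background covariance, lightly regularized so it stays invertible.
    cov = np.cov(spectra, rowvar=False) + 1e-6 * np.eye(spectra.shape[1])
    cinv = np.linalg.inv(cov)
    x = spectra - mu
    return (x @ cinv @ target) / (target @ cinv @ target)

def ime_flux(ime_kg, plume_length_m, u_eff_ms):
    """IME model: source rate Q = U_eff / L * IME, in kg/s."""
    return u_eff_ms / plume_length_m * ime_kg
```

Because the filter is applied to mean-subtracted spectra, the enhancement averages to zero over the background, so plume pixels stand out as positive anomalies.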
