Search Results (127)

Search Parameters:
Keywords = multispectral image matching

29 pages, 6237 KB  
Article
Development of a Multi-Scale Spectrum Phenotyping Framework for High-Throughput Screening of Salt-Tolerant Rice Varieties
by Xiaorui Li, Jiahao Han, Dongdong Han, Shibo Fang, Zhanhao Zhang, Li Yang, Chunyan Zhou, Chengming Jin and Xuejian Zhang
Agronomy 2026, 16(6), 658; https://doi.org/10.3390/agronomy16060658 - 20 Mar 2026
Viewed by 395
Abstract
Soil salinization severely threatens agricultural sustainability in saline–alkali regions, and high-throughput, efficient screening of salt-tolerant rice varieties is critical to mitigating this threat. Traditional evaluation methods are constrained by low throughput, limited spatiotemporal resolution, and the lack of standardized indicators. To address these gaps, this study established a multi-scale spectral phenotyping framework integrating ground-based hyperspectral, UAV-borne multispectral, and Sentinel-2 satellite remote sensing data for high-throughput screening of salt-tolerant rice. Field experiments were conducted with 12 rice lines at five key growth stages in Ningxia, China, with synchronous ground spectral measurements and UAV image acquisition on the same day for each stage. Five feature selection methods were employed to screen salt stress-sensitive hyperspectral bands, with classification accuracy validated via a Support Vector Machine (SVM) model. The results showed that: (1) rice spectral characteristics varied dynamically across growth stages, and first-order differential transformation effectively amplified subtle spectral variations in stress-sensitive regions; (2) the Minimum Redundancy–Maximum Relevance (mRMR) method outperformed other methods, achieving 100% classification accuracy at key growth stages, with sensitive bands dominated by red edge bands (58.33%); (3) the constructed Salt Stress Index (SIR) showed strong correlations with classical vegetation indices and rice yield, and could clearly distinguish salt-tolerant and salt-sensitive rice varieties, with stable performance against field environmental noise; and (4) band matching between UAV and Sentinel-2 data enabled multi-scale data fusion and regional-scale salt stress monitoring. This framework realizes the transformation from qualitative spectral description to quantitative salt tolerance evaluation, providing standardized technical support for salt-tolerant rice breeding and precision management of saline–alkali lands.
(This article belongs to the Section Precision and Digital Agriculture)
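The band-screening step lends itself to a compact illustration. Below is a minimal sketch of mRMR-style band selection followed by an SVM check on synthetic stand-in data; the greedy criterion (mutual-information relevance minus mean correlation redundancy), the array shapes, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mrmr_select(X, y, k=10):
    """Greedy mRMR-style selection: maximize relevance to the class labels,
    penalize redundancy with already-selected bands."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_score, best_j = -np.inf, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean absolute correlation with chosen bands
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = relevance[j] - red
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
    return selected

# Illustrative data: 120 plots x 200 first-derivative bands, 2 stress classes
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)
bands = mrmr_select(X, y, k=10)
acc = cross_val_score(SVC(kernel="rbf"), X[:, bands], y, cv=5).mean()
print(f"selected bands: {bands}\nSVM CV accuracy: {acc:.2f}")
```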

26 pages, 8878 KB  
Article
A Spectrally Compatible Pseudo-Panchromatic Intensity Reconstruction for PCA-Based UAS RGB–Multispectral Image Fusion
by Dimitris Kaimaris
J. Imaging 2026, 12(3), 122; https://doi.org/10.3390/jimaging12030122 - 11 Mar 2026
Viewed by 313
Abstract
The paper presents a method for generating a pseudo-panchromatic (PPAN) orthophotomosaic that is spectrally compatible with the multispectral (MS) orthophotomosaic, and it targets the fusion of unmanned aircraft system (UAS) RGB–MS orthophotomosaics when no true panchromatic band is available. In typical UAS imaging systems, RGB and multispectral sensors operate independently and exhibit different spectral responses and spatial resolutions, making the construction of a spectrally compatible substitution intensity a critical challenge for component substitution fusion. The conventional RGB-derived PPAN preserves spatial detail but is constrained by RGB–MS spectral incompatibility, expressed as reduced corresponding-band similarity. The proposed hybrid intensity (PPANE) increases the mean corresponding-band correlation from 0.842 (PPANA) to 0.928 and reduces the across-site mean SAM from 5.782° to 4.264°, while maintaining spatial sharpness comparable to the RGB-derived intensity. It is proposed that the PPANE orthophotomosaic be produced as a hybrid intensity (single-band) image. Specifically, a multispectral-visible-derived intensity is resampled onto the RGB grid and statistically integrated with RGB spatial detail, followed by mild high-frequency enhancement to produce the final PPANE orthophotomosaic. Principal Component Analysis (PCA) fusion is applied to seven archaeological sites in Northern Greece. Spectral quality is evaluated on the MS grid using band-wise (corresponding-band) correlation and the Spectral Angle Mapper (SAM), while the spatial sharpness of the fused NIR orthophotomosaic is assessed using Tenengrad and Laplacian variance. The PPANE orthophotomosaic consistently increases correlations relative to PPANA (especially in Red Edge/NIR) and reduces the mean site-mean SAM. PPANC yields the lowest SAM but also the lowest spatial sharpness/clarity, whereas PPANE maintains spatial sharpness/clarity comparable to PPANA, supporting a balance between spectral consistency and spatial detail, as also confirmed through comparative evaluation against established component substitution fusion methods. The approach is reproducible and avoids full histogram matching; instead, it relies on explicitly defined linear standardization steps (mean–std normalization) and controlled spatial sharpening, and performs consistently across different scenes.
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
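As a rough illustration of the component-substitution idea, the sketch below swaps the first principal component of an MS stack for a mean/std-standardized intensity and scores the result with SAM (the mean per-pixel spectral angle). The data are synthetic stand-ins, and the plain standardization is a simplified placeholder for the paper's PPANE construction.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_fuse(ms, pan):
    """Component-substitution fusion: replace PC1 with a mean/std-matched
    intensity. ms: (H, W, B) MS resampled to the pan grid; pan: (H, W)."""
    H, W, B = ms.shape
    pca = PCA(n_components=B)
    pcs = pca.fit_transform(ms.reshape(-1, B))
    # Linear standardization of pan to PC1 statistics (no histogram matching)
    p = (pan.ravel() - pan.mean()) / (pan.std() + 1e-9)
    pcs[:, 0] = p * pcs[:, 0].std() + pcs[:, 0].mean()
    return pca.inverse_transform(pcs).reshape(H, W, B)

def sam_deg(a, b):
    """Mean Spectral Angle Mapper (degrees) between two (H, W, B) images."""
    a, b = a.reshape(-1, a.shape[-1]), b.reshape(-1, b.shape[-1])
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)
                            * np.linalg.norm(b, axis=1) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1, 1))).mean()

rng = np.random.default_rng(1)
ms = rng.random((64, 64, 5))                          # stand-in MS mosaic
pan = ms.mean(axis=2) + 0.05 * rng.random((64, 64))   # stand-in PPAN intensity
fused = pca_fuse(ms, pan)
print(f"SAM vs. original MS: {sam_deg(ms, fused):.2f} deg")
```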

15 pages, 4657 KB  
Article
Multispectral Characterization of Additively Manufactured and Dip-Coated Axicons
by Abhijeet Shrotri, Annamarija Starsaja, Suraj Joshi, Sascha Preu and Oliver Stübbe
Photonics 2026, 13(3), 264; https://doi.org/10.3390/photonics13030264 - 10 Mar 2026
Viewed by 366
Abstract
The use of additive manufacturing for rapid prototyping of near-infrared and terahertz components provides seamless and error-free production. This article discusses the additive manufacturing and post-processing of axicons and their performance evaluation using fundamental techniques based on attenuation and near-field measurements. The axicons are manufactured using the materials cyclic olefin copolymer (TOPAS) and polymethyl methacrylate (PMMA), for use in terahertz and near-infrared applications, respectively. Optical and terahertz components manufactured using traditional 3D-printing processes, e.g., fused filament fabrication or stereolithography, exhibit high surface roughness in the range of 15 ± 2.5 µm, resulting in undesired propagation and scattering at near-infrared wavelengths. This research work proposes an economical post-processing technique for additively manufactured terahertz and near-infrared axicons for applications in multispectral characterization, e.g., bio-sensing. The authors used an enhanced method of dip-coating, which involves interval dipping and intermittent hardening to achieve a better surface finish. An emphasis is placed on interval dipping and intermittent hardening, which lead to excellent transparency in the case of additively manufactured near-infrared axicons. The dip-coated samples exhibit surface roughness below 10 nm. When heated resin is used as the coating layer, its reduced viscosity allows it to distribute uniformly over the surface of the 3D-printed terahertz and near-infrared axicons. The authors also observed that the depth-of-focus (DOF) length deviation between unprocessed and enhanced dip-coated axicons remains within the measurement error estimated from analytical calculations. In addition to the improved surface finish and transparency, the coatings are also closely matched in refractive index to the axicon material. Such post-processed axicons pave the way for producing a wide array of systems in the fields of communication, imaging, and bio-sensing.
(This article belongs to the Special Issue Optical Thin Films: From Materials to Applications)

18 pages, 9422 KB  
Article
A SAM2-Driven RGB-T Annotation Pipeline with Thermal-Guided Refinement for Semantic Segmentation in Search-and-Rescue Scenes
by Andrés Salas-Espinales, Ricardo Vázquez-Martín and Anthony Mandow
Modelling 2026, 7(2), 50; https://doi.org/10.3390/modelling7020050 - 4 Mar 2026
Viewed by 680
Abstract
High-quality RGB–thermal infrared (RGB-T) semantic segmentation datasets are crucial for search-and-rescue (SAR) applications, yet their development is hindered by the scarcity of annotated ground truth and by the challenges of thermal-camera calibration, which typically depends on heated targets with limited geometric definition. Recent approaches focus on using semantic segmentation annotation tools and transferring RGB masks to multi-spectral data, but they do not fully address the need for robust cross-modal geometric validation, quality control, or human-in-the-loop reliability assessment in RGB-T segmentation. To fill this gap, we propose a validated cross-modal annotation pipeline that combines deep correspondence matching, geometric transformation (affine or homography) of RGB-T pairs, and quantitative alignment validation. Our pipeline integrates semi-automatic annotation based on the Segment Anything Model 2 (SAM2) in Label Studio with guided human refinement, and incorporates quantitative cost and quality control via inter-annotator agreement before downstream model training. Results across three annotators show that the proposed approach reduces annotation time by 36% while achieving high annotation quality (mean IoU = 74.9%) and strong inter-annotator agreement (mean pixel accuracy = 74.3%, Cohen’s κ = 65%). The pipeline was used to annotate a SAR-oriented RGB-T dataset comprising 306 image pairs, which then served to train two state-of-the-art (SOTA) RGB-T segmentation models. These findings demonstrate the practical value of the proposed methodology and establish a reproducible framework for generating reliable RGB-T semantic segmentation datasets, complementing and extending recent multispectral auto-labeling approaches.
(This article belongs to the Section Modelling in Artificial Intelligence)
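The inter-annotator quality control can be illustrated with a short sketch computing mask IoU and Cohen's kappa for two annotation masks. The masks here are synthetic stand-ins, and the binary (single-class) formulation is a simplification of the paper's multi-class setting.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-Union between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def cohens_kappa(a, b):
    """Cohen's kappa for two binary pixel labelings."""
    a, b = a.ravel(), b.ravel()
    po = (a == b).mean()                       # observed agreement
    pe = (a.mean() * b.mean()                  # chance agreement, positives
          + (1 - a.mean()) * (1 - b.mean()))   # chance agreement, negatives
    return (po - pe) / (1 - pe + 1e-9)

rng = np.random.default_rng(2)
ann1 = rng.random((256, 256)) > 0.5    # stand-ins for two annotators' masks
ann2 = ann1.copy()
ann2[:32] = ~ann2[:32]                 # simulate disagreement on one strip
print(f"IoU: {mask_iou(ann1, ann2):.3f}  kappa: {cohens_kappa(ann1, ann2):.3f}")
```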

15 pages, 3953 KB  
Article
Age Prediction of Hematoma from Hyperspectral Images Using Convolutional Neural Networks
by Arash Keshavarz, Gerald Bieber, Daniel Wulff, Carsten Babian and Stefan Lüdtke
J. Imaging 2026, 12(2), 78; https://doi.org/10.3390/jimaging12020078 - 11 Feb 2026
Viewed by 648
Abstract
Accurate estimation of hematoma age remains a major challenge in forensic practice, as current assessments rely heavily on subjective visual interpretation. Hyperspectral imaging (HSI) captures rich spectral signatures that may reflect the biochemical evolution of hematomas over time. This study evaluates whether a convolutional neural network (CNN) integrating both spectral and spatial information improves hematoma age estimation accuracy. Additionally, we investigate whether performance can be maintained using a reduced, physiologically motivated subset of wavelengths. Using a dataset of forearm hematomas from 25 participants, we applied radiometric normalization and SAM-based segmentation to extract 64×64×204 hyperspectral patches. In leave-one-subject-out cross-validation, the CNN outperformed a spectral-only Lasso baseline, reducing the mean absolute error (MAE) from 3.24 days to 2.29 days. Band-importance analysis combining SmoothGrad and occlusion sensitivity identified 20 highly informative wavelengths; using only these bands matched or exceeded the accuracy of the full 204-band model across early, middle, and late hematoma stages. These results demonstrate that spectral–spatial modeling and physiologically grounded band selection can enhance estimation accuracy while significantly reducing data dimensionality. This approach supports the development of compact multispectral systems for objective clinical and forensic evaluation.
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)
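A minimal PyTorch sketch of a spectral–spatial CNN over 64×64×204 patches, treating the 204 bands as input channels and regressing age in days; the architecture and layer sizes are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    """Minimal 2D CNN: spectral bands as input channels, age regression head."""
    def __init__(self, bands=204):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                  # x: (N, bands, 64, 64)
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = SpectralSpatialCNN()
patch = torch.randn(8, 204, 64, 64)       # stand-in hyperspectral patches
age = model(patch)                        # predicted age in days, shape (8,)
# An MAE-style loss, matching the abstract's reported error metric
loss = nn.functional.l1_loss(age, torch.rand(8) * 10)
print(age.shape, float(loss))
```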

12 pages, 3845 KB  
Proceeding Paper
Exploring the Application of UAV-Multispectral Sensors for Proximal Imaging of Agricultural Crops
by Tarun Teja Kondraju, Rabi N. Sahoo, Selvaprakash Ramalingam, Rajan G. Rejith, Amrita Bhandari, Rajeev Ranjan and Devanakonda Venkata Sai Chakradhar Reddy
Eng. Proc. 2025, 118(1), 91; https://doi.org/10.3390/ECSA-12-26542 - 7 Nov 2025
Viewed by 682
Abstract
UAV-mounted multispectral sensors are widely used to study crop health. Utilising the same cameras to capture close-up images of crops can significantly improve crop health evaluations through multispectral technology. Unlike RGB cameras that only detect visible light, these sensors can identify additional spectral bands in the red-edge and near-infrared (NIR) ranges. This enables early detection of diseases, pests, and deficiencies through the calculation of various spectral indices. In this work, the ability to use UAV-multispectral sensors for close-proximity imaging of crops was studied. Images of plants were taken with a MicaSense RedEdge-MX from top and side views at a distance of 1 m. The camera has five sensors that independently capture blue, green, red, red-edge, and NIR light. The slight misalignment of these sensors results in a shift in the swath. This shift needs to be corrected to create a proper layer stack that allows further processing. This research utilised the Oriented FAST and Rotated BRIEF (ORB) method to detect features in each image. Random sample consensus (RANSAC) was used to robustly match the detected features between each slave image and the master image (the green band). A homography was then used to warp each slave image into alignment with the master image. After alignment, the images were stacked, and the alignment accuracy was visually checked using true colour composites. The side-view images of the plants were perfectly aligned, while the top-view images showed errors, particularly in the pixels far from the centre. This study demonstrates that UAV-mounted multispectral sensors can capture images of plants effectively, provided the plant is centred in the frame and occupies a smaller area within the image.
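This ORB/RANSAC/homography workflow maps naturally onto OpenCV. The sketch below is a minimal version assuming 8-bit grayscale band images; the filenames and band list in the commented usage are illustrative, not the study's data.

```python
import cv2
import numpy as np

def align_band(slave, master):
    """Warp one band onto the master band using ORB features and a
    RANSAC-estimated homography. Inputs are 8-bit grayscale images."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(slave, None)
    kp2, des2 = orb.detectAndCompute(master, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(slave, H, master.shape[::-1])

# Illustrative use: align every band to the green master, then stack.
# bands = {n: cv2.imread(f"{n}.tif", cv2.IMREAD_GRAYSCALE)
#          for n in ["blue", "green", "red", "rededge", "nir"]}
# aligned = {n: (b if n == "green" else align_band(b, bands["green"]))
#            for n, b in bands.items()}
# stack = np.dstack(list(aligned.values()))
```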

26 pages, 6622 KB  
Article
Radiometric Cross-Calibration and Performance Analysis of HJ-2A/2B 16m-MSI Using Landsat-8/9 OLI with Spectral-Angle Difference Correction
by Jian Zeng, Hang Zhao, Yongfang Su, Qiongqiong Lan, Qijin Han, Xuewen Zhang, Xinmeng Wang, Zhaopeng Xu, Zhiheng Hu, Xiaozheng Du and Bopeng Yang
Remote Sens. 2025, 17(21), 3569; https://doi.org/10.3390/rs17213569 - 28 Oct 2025
Viewed by 1320
Abstract
The Huanjing-2A/2B (HJ-2A/2B) satellites are China’s next-generation environmental monitoring satellites, equipped with four visible light wide-swath charge-coupled device (CCD) sensors. These sensors enable the acquisition of 16-m multispectral imagery (16m-MSI) with a swath width of 800 km through field-of-view stitching. However, traditional vicarious calibration techniques are limited by their calibration frequency, making them insufficient for continuous monitoring requirements. To address this challenge, the present study proposes a spectral-angle difference correction-based cross-calibration approach, using the Landsat 8/9 Operational Land Imager (OLI) as the reference sensor to calibrate the HJ-2A/2B CCD sensors. This method improves both radiometric accuracy and temporal frequency. The study utilizes cloud-free image pairs of HJ-2A/2B CCD and Landsat 8/9 OLI, acquired simultaneously at the Dunhuang and Golmud calibration sites between 2021 and 2024, in combination with atmospheric parameters from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) dataset and historical ground-measured spectral reflectance data for cross-calibration. The methodology includes spatial matching and resampling of the image pairs, along with the identification of radiometrically stable homogeneous regions. To account for sensor viewing geometry differences, an observation-angle linear correction model is introduced. Spectral band adjustment factors (SBAFs) are also applied to correct for discrepancies in spectral response functions (SRFs) across sensors. Experimental results demonstrate that the cross-calibration coefficients differ by less than 10% compared to vicarious calibration results from the China Centre for Resources Satellite Data and Application (CRESDA). Additionally, using Sentinel-2 MSI as the reference sensor, the cross-calibration coefficients were independently validated through cross-validation. The results indicate that the radiometrically corrected HJ-2A/2B 16m-MSI CCD data, based on these coefficients, exhibit improved radiometric consistency with Sentinel-2 MSI observations. Further analysis shows that the cross-calibration method significantly enhances radiometric consistency across the HJ-2A/2B 16m-MSI CCD sensors, with radiometric response differences between CCD1 and CCD4 maintained below 3%. Error analysis quantifies the impact of atmospheric parameters and surface reflectance on calibration accuracy, as well as the total uncertainty. The proposed spectral-angle correction-based cross-calibration method not only improves calibration accuracy but also offers reliable technical support for long-term radiometric performance monitoring of the HJ-2A/2B 16m-MSI CCD sensors.
(This article belongs to the Special Issue Remote Sensing Satellites Calibration and Validation: 2nd Edition)
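A minimal sketch of the SBAF idea: band-average a ground reflectance spectrum under each sensor's spectral response function, take the ratio, then fit a zero-intercept gain against the target sensor's digital numbers. The SRF shapes, the stand-in spectrum, and the regression form are illustrative assumptions, not the study's actual calibration chain.

```python
import numpy as np

def sbaf(ref_spectrum, srf_target, srf_ref):
    """Spectral band adjustment factor from a ground reflectance spectrum and
    two sensors' relative spectral responses on the same wavelength grid."""
    band_refl = lambda srf: (ref_spectrum * srf).sum() / srf.sum()
    return band_refl(srf_target) / band_refl(srf_ref)

def calibration_gain(dn_target, radiance_ref, sbaf_value):
    """Zero-intercept least-squares gain: SBAF-adjusted reference radiance
    regressed against the target sensor's digital numbers."""
    adjusted = radiance_ref * sbaf_value
    return np.sum(adjusted * dn_target) / np.sum(dn_target ** 2)

wl = np.linspace(400, 900, 501)                  # nm
spectrum = 0.2 + 0.1 * np.sin(wl / 80)           # stand-in desert-site reflectance
srf_a = np.exp(-((wl - 560) / 30) ** 2)          # stand-in SRFs for one band pair
srf_b = np.exp(-((wl - 555) / 35) ** 2)
print(f"SBAF: {sbaf(spectrum, srf_a, srf_b):.4f}")
```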

22 pages, 15219 KB  
Article
Integrating UAS Remote Sensing and Edge Detection for Accurate Coal Stockpile Volume Estimation
by Sandeep Dhakal, Ashish Manandhar, Ajay Shah and Sami Khanal
Remote Sens. 2025, 17(18), 3136; https://doi.org/10.3390/rs17183136 - 10 Sep 2025
Viewed by 2163
Abstract
Accurate stockpile volume estimation is essential for industries that manage bulk materials across various stages of production. Conventional ground-based methods such as walking wheels, total stations, Global Navigation Satellite Systems (GNSSs), and Terrestrial Laser Scanners (TLSs) have been widely used, but often involve significant safety risks, particularly when accessing hard-to-reach or hazardous areas. Unmanned Aerial Systems (UASs) provide a safer and more efficient alternative for surveying irregularly shaped stockpiles. This study evaluates UAS-based methods for estimating the volume of coal stockpiles at a storage facility near Cadiz, Ohio. Two sensor platforms were deployed: a Freefly Alta X quadcopter equipped with a Real-Time Kinematic (RTK) Light Detection and Ranging (LiDAR) unit (an active sensor) and a WingtraOne UAS with Post-Processed Kinematic (PPK) multispectral imaging (a passive optical sensor). Three approaches were compared: (1) LiDAR; (2) Structure-from-Motion (SfM) photogrammetry with a Digital Surface Model (DSM) and Digital Terrain Model (DTM) (SfM–DTM); and (3) an SfM-derived DSM combined with a kriging-interpolated DTM (SfM–intDTM). An automated boundary detection workflow was developed, integrating slope thresholding, Near-Infrared (NIR) spectral filtering, and Canny edge detection. Volume estimates from SfM–DTM and SfM–intDTM closely matched LiDAR-based reference estimates, with Root Mean Square Error (RMSE) values of 147.51 m3 and 146.18 m3, respectively. The SfM–intDTM approach achieved a Mean Absolute Percentage Error (MAPE) of ~2%, indicating strong agreement with LiDAR and improved accuracy compared to prior studies. A sensitivity analysis further highlighted the role of spatial resolution in volume estimation. While RMSE values remained consistent (141–162 m3) and the MAPE below 2.5% for resolutions between 0.06 m and 5 m, accuracy declined at coarser resolutions, with the MAPE rising to 11.76% at 10 m. This emphasizes the need to balance the resolution with the study objectives, geographic extent, and computational costs when selecting elevation data for volume estimation. Overall, UAS-based SfM photogrammetry combined with interpolated DTMs and automated boundary extraction offers a scalable, cost-effective, and accurate approach for stockpile volume estimation. The methodology is well-suited for both the high-precision monitoring of individual stockpiles and broader regional-scale assessments and can be readily adapted to other domains such as quarrying, agricultural storage, and forestry operations.
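The core volume computation reduces to integrating positive DSM−DTM heights over the pile boundary. A minimal numpy sketch, with a synthetic conical pile standing in for a coal stockpile and a simple threshold mask standing in for the slope/NIR/Canny boundary workflow:

```python
import numpy as np

def stockpile_volume(dsm, dtm, cell_size, mask=None):
    """Cut volume above the terrain: sum positive (DSM - DTM) heights times
    cell area, optionally restricted to a detected pile boundary mask."""
    height = np.clip(dsm - dtm, 0, None)
    if mask is not None:
        height = np.where(mask, height, 0.0)
    return height.sum() * cell_size ** 2

dtm = np.zeros((200, 200))                          # flat terrain, 0.06 m grid
yy, xx = np.mgrid[:200, :200]
dist = np.hypot(yy - 100, xx - 100)
dsm = np.maximum(3.0 - 0.04 * dist, 0.0)            # synthetic conical pile
mask = dsm > 0.1                                    # stand-in boundary mask
print(f"volume: {stockpile_volume(dsm, dtm, 0.06, mask):.1f} m^3")
```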

23 pages, 6105 KB  
Article
YUV Color Model-Based Adaptive Pansharpening with Lanczos Interpolation and Spectral Weights
by Shavkat Fazilov, Ozod Yusupov, Erali Eshonqulov, Khabiba Abdieva and Ziyodullo Malikov
Mathematics 2025, 13(17), 2868; https://doi.org/10.3390/math13172868 - 5 Sep 2025
Cited by 1 | Viewed by 1036
Abstract
Pansharpening is a method of image fusion that combines a panchromatic (PAN) image with high spatial resolution and multispectral (MS) images, which possess different spectral characteristics and are frequently obtained from satellite sensors. Despite the development of numerous pansharpening methods in recent years, a key challenge continues to be the maintenance of both spatial details and spectral accuracy in the combined image. To tackle this challenge, we introduce a new approach that enhances the component substitution-based Adaptive IHS method by integrating the YUV color model along with weighting coefficients influenced by the multispectral data. In our proposed approach, the conventional IHS color model is substituted with the YUV model to enhance spectral consistency. Additionally, Lanczos interpolation is used to upscale the MS image to match the spatial resolution of the PAN image. Each channel of the MS image is fused using adaptive weights derived from the influence of multispectral data, leading to the final pansharpened image. Based on the findings from experiments conducted on the PairMax and PanCollection datasets, our proposed method exhibited superior spectral and spatial performance when compared to several existing pansharpening techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
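A minimal OpenCV sketch of YUV component substitution with Lanczos upsampling. For brevity it uses the fixed BT.601 luma inside cv2.cvtColor and plain mean/std matching rather than the paper's adaptive, MS-driven weights, so it is an illustrative baseline, not the proposed method.

```python
import cv2
import numpy as np

def yuv_pansharpen(ms, pan):
    """Component substitution in YUV space: Lanczos-upsample the MS image,
    replace the luma (Y) with a mean/std-matched PAN, keep U and V.
    ms: (h, w, 3) float32 in [0, 1]; pan: (H, W) float32 at full resolution."""
    H, W = pan.shape
    up = cv2.resize(ms, (W, H), interpolation=cv2.INTER_LANCZOS4)
    yuv = cv2.cvtColor(up.astype(np.float32), cv2.COLOR_RGB2YUV)
    y = yuv[..., 0]
    # Linear standardization of PAN to the upsampled luma's statistics
    yuv[..., 0] = (pan - pan.mean()) / (pan.std() + 1e-9) * y.std() + y.mean()
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)

rng = np.random.default_rng(4)
ms = rng.random((64, 64, 3)).astype(np.float32)      # stand-in low-res MS
pan = cv2.resize(ms.mean(2), (256, 256),
                 interpolation=cv2.INTER_LANCZOS4)   # stand-in PAN band
sharp = yuv_pansharpen(ms, pan)
print(sharp.shape)                                   # (256, 256, 3)
```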

28 pages, 24868 KB  
Article
Deep Meta-Connectivity Representation for Optically-Active Water Quality Parameters Estimation Through Remote Sensing
by Fangling Pu, Ziang Luo, Yiming Yang, Hongjia Chen, Yue Dai and Xin Xu
Remote Sens. 2025, 17(16), 2782; https://doi.org/10.3390/rs17162782 - 11 Aug 2025
Viewed by 1071
Abstract
Monitoring optically-active water quality (OAWQ) parameters faces key challenges, primarily due to limited in situ measurements and the restricted availability of high-resolution multispectral remote sensing imagery. While deep learning has shown promise for OAWQ estimation, existing approaches such as GeoTile2Vec, which relies on geographic proximity, and SimCLR, a domain-agnostic contrastive learning method, fail to capture land cover-driven water quality patterns, limiting their generalizability. To address this, we present deep meta-connectivity representation (DMCR), which integrates multispectral remote sensing imagery with limited in situ measurements to estimate OAWQ parameters. Our approach constructs meta-feature vectors from land cover images to represent the water quality characteristics of each multispectral remote sensing image tile. We introduce the meta-connectivity concept to quantify the OAWQ similarity between different tiles. Building on this concept, we design a contrastive self-supervised learning framework that uses sets of quadruple tiles extracted from Sentinel-2 imagery based on their meta-connectivity to learn DMCR vectors. After the core neural network is trained, we apply a random forest model to estimate parameters such as chlorophyll-a (Chl-a) and turbidity using matched in situ measurements and DMCR vectors across time and space. We evaluate DMCR on Lake Erie and Lake Ontario, generating a series of Chl-a and turbidity distribution maps. Performance is assessed using the R2 and RMSE metrics. Results show that meta-connectivity more effectively captures water quality similarities between tiles than widely utilized geographic proximity approaches such as those used in GeoTile2Vec. Furthermore, DMCR outperforms baseline models such as SimCLR with randomly cropped tiles. The resulting distribution maps align well with known factors influencing Chl-a and turbidity levels, confirming the method’s reliability. Overall, DMCR demonstrates strong potential for large-scale OAWQ estimation and contributes to improved monitoring of inland water bodies with limited in situ measurements through meta-connectivity-informed deep learning. The temporal–spatial water quality maps can support large-scale inland water monitoring and early warning of harmful algal blooms.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
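The abstract does not give the exact form of the meta-connectivity score, so the sketch below is one plausible reading rather than the paper's definition: meta-feature vectors as land cover class proportions per tile, compared by cosine similarity. High-connectivity tile pairs would then serve as positives for the contrastive objective.

```python
import numpy as np

def meta_feature(land_cover_tile, n_classes=10):
    """Meta-feature vector: land cover class proportions within a tile."""
    counts = np.bincount(land_cover_tile.ravel(), minlength=n_classes)
    return counts / counts.sum()

def meta_connectivity(tile_a, tile_b, n_classes=10):
    """Cosine similarity between two tiles' meta-feature vectors; a stand-in
    for the paper's meta-connectivity score."""
    fa = meta_feature(tile_a, n_classes)
    fb = meta_feature(tile_b, n_classes)
    return fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-9)

rng = np.random.default_rng(5)
tiles = [rng.integers(0, 10, size=(64, 64)) for _ in range(3)]
# Tile pairs with high connectivity would form positives for contrastive training
print(f"connectivity(0,1) = {meta_connectivity(tiles[0], tiles[1]):.3f}")
```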

23 pages, 3875 KB  
Article
Soil Water-Soluble Ion Inversion via Hyperspectral Data Reconstruction and Multi-Scale Attention Mechanism: A Remote Sensing Case Study of Farmland Saline–Alkali Lands
by Meichen Liu, Shengwei Zhang, Jing Gao, Bo Wang, Kedi Fang, Lu Liu, Shengwei Lv and Qian Zhang
Agronomy 2025, 15(8), 1779; https://doi.org/10.3390/agronomy15081779 - 24 Jul 2025
Cited by 2 | Viewed by 1680
Abstract
The salinization of agricultural soils is a serious threat to farming and ecological balance in arid and semi-arid regions. Accurate estimation of soil water-soluble ions (calcium, carbonate, magnesium, and sulfate) is necessary for accurate monitoring of soil salinization and sustainable land management. Hyperspectral ground-based data are valuable in soil salinization monitoring, but the acquisition cost is high, and the coverage is small. Therefore, this study proposes a two-stage deep learning framework with multispectral remote-sensing images. First, the wavelet transform is used to enhance the Transformer and extract fine-grained spectral features to reconstruct the ground-based hyperspectral data. A comparison with measured ground-based hyperspectral data shows that the reconstructed spectra match the measurements in the 450–998 nm range, with R2 up to 0.98 and MSE = 0.31. This high similarity compensates for the low spectral resolution and weak feature expression of multispectral remote-sensing data. Subsequently, this enhanced spectral information was integrated and fed into a novel multiscale self-attentive Transformer model (MSATransformer) to invert four water-soluble ions. Compared with BPANN, MLP, and the standard Transformer model, our model remains robust across different spectra, achieving an R2 of up to 0.95 and reducing the average relative error by more than 30%. Among them, for the strongly responsive ions magnesium and sulfate, R2 reaches 0.92 and 0.95 (with RMSE of 0.13 and 0.29 g/kg, respectively). For the weakly responsive ions calcium and carbonate, R2 stays above 0.80 (RMSE is below 0.40 g/kg). The MSATransformer framework provides a low-cost and high-accuracy solution to monitor soil salinization at large scales and supports precision farmland management.
(This article belongs to the Special Issue Water and Fertilizer Regulation Theory and Technology in Crops)
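A minimal sketch of the wavelet step only, using PyWavelets to decompose a reflectance spectrum into multi-scale coefficients of the kind a Transformer encoder could attend over; the wavelet family, decomposition level, and how the coefficients feed the network are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt

def wavelet_features(spectrum, wavelet="db4", level=3):
    """Multi-level discrete wavelet decomposition of a reflectance spectrum;
    the concatenated coefficients expose fine-grained spectral structure."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

rng = np.random.default_rng(6)
wl = np.linspace(450, 998, 550)   # nm, matching the paper's reported range
spectrum = 0.3 + 0.05 * np.sin(wl / 40) + 0.01 * rng.normal(size=wl.size)
feats = wavelet_features(spectrum)
print(spectrum.size, "->", feats.size)   # multi-scale encoding of the spectrum
```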

31 pages, 4937 KB  
Article
Proximal LiDAR Sensing for Monitoring of Vegetative Growth in Rice at Different Growing Stages
by Md Rejaul Karim, Md Nasim Reza, Shahriar Ahmed, Kyu-Ho Lee, Joonjea Sung and Sun-Ok Chung
Agriculture 2025, 15(15), 1579; https://doi.org/10.3390/agriculture15151579 - 23 Jul 2025
Cited by 2 | Viewed by 1476
Abstract
Precise monitoring of vegetative growth is essential for assessing crop responses to environmental changes. Conventional methods of geometric characterization of plants, such as RGB imaging, multispectral sensing, and manual measurements, often lack precision or scalability for growth monitoring of rice. LiDAR offers high-resolution, non-destructive 3D canopy characterization and has shown success in other crops such as vineyards, yet its application to rice across different growth stages remains underexplored. This study addresses that gap by using LiDAR for geometric characterization of rice plants at early, middle, and late growth stages. The objective of this study was to characterize rice plant geometry, such as plant height, canopy volume, row distance, and plant spacing, using the proximal LiDAR sensing technique at three different growth stages. A commercial LiDAR sensor (model: VLP-16, Velodyne Lidar, San Jose, CA, USA) was mounted on a wheeled aluminum frame for data collection; preprocessing, visualization, and geometric feature characterization were performed using a commercial software solution, Python (version 3.11.5), and a custom algorithm. Manual measurements were compared with the LiDAR 3D point cloud measurements, demonstrating high precision in estimating plant geometric characteristics. LiDAR-estimated plant height, canopy volume, row distance, and spacing were 0.5 ± 0.1 m, 0.7 ± 0.05 m3, 0.3 ± 0.00 m, and 0.2 ± 0.001 m at the early stage; 0.93 ± 0.13 m, 1.30 ± 0.12 m3, 0.32 ± 0.01 m, and 0.19 ± 0.01 m at the middle stage; and 0.99 ± 0.06 m, 1.25 ± 0.13 m3, 0.38 ± 0.03 m, and 0.10 ± 0.01 m at the late growth stage. These measurements closely matched manual observations across the three stages. RMSE values ranged from 0.01 to 0.06 m and r2 values ranged from 0.86 to 0.98 across parameters, confirming the high accuracy and reliability of proximal LiDAR sensing under field conditions. Although precision was achieved across growth stages, complex canopy structures under field conditions posed segmentation challenges. Further advances in point cloud filtering and classification are required to reliably capture such variability.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
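A minimal numpy sketch of extracting plant height and a voxel-based canopy volume from an (N, 3) point cloud; the percentile choices, voxel size, and synthetic canopy are illustrative assumptions, not the study's custom algorithm.

```python
import numpy as np

def canopy_metrics(points, ground_z=None):
    """Plant height and voxel-occupancy canopy volume from an (N, 3) cloud;
    percentile heights reduce sensitivity to outlier returns."""
    z = points[:, 2]
    if ground_z is None:
        ground_z = np.percentile(z, 2)         # approximate ground level
    height = np.percentile(z, 99) - ground_z
    # Canopy volume: count occupied 5 cm voxels above the ground layer
    voxel = 0.05
    canopy = points[z > ground_z + 0.05]
    occupied = {tuple(v) for v in np.floor(canopy / voxel).astype(int)}
    return height, len(occupied) * voxel ** 3

rng = np.random.default_rng(7)
pts = np.column_stack([rng.random(5000) * 0.3,          # x (m)
                       rng.random(5000) * 0.3,          # y (m)
                       rng.random(5000) ** 2 * 0.9])    # z: stand-in canopy
h, v = canopy_metrics(pts)
print(f"height: {h:.2f} m  volume: {v:.3f} m^3")
```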

20 pages, 5153 KB  
Article
A Practical Method for Red-Edge Band Reconstruction for Landsat Image by Synergizing Sentinel-2 Data with Machine Learning Regression Algorithms
by Yuan Zhang, Zhekui Fan, Wenjia Yan, Chentian Ge and Huasheng Sun
Sensors 2025, 25(11), 3570; https://doi.org/10.3390/s25113570 - 5 Jun 2025
Cited by 1 | Viewed by 2510
Abstract
Red-edge bands are among the most essential spectral data in multispectral remote sensing imagery, playing a critical role in monitoring vegetation growth status at regional and global scales. However, the absence of red-edge bands limits the applicability of Landsat images, the most widely used remote sensing data, to vegetation monitoring. This study proposes an innovative method to reconstruct Landsat’s red-edge bands. The consistency in corresponding bands of Landsat OLI and Sentinel-2 MSI was first investigated using different resampling approaches and atmospheric correction algorithms. Three machine learning algorithms (ridge regression, gradient boosted regression tree (GBRT), and random forest regression) were then employed to build the red-edge reconstruction model for different vegetation types. With the optimal model, three red-edge bands of Landsat OLI were subsequently obtained, along with their derived vegetation indices. Our results showed that bilinear interpolation resampling, in combination with the LaSRC atmospheric correction algorithm, achieved high consistency between the matching bands of OLI and MSI (R2 > 0.88). With the GBRT algorithm, three simulated OLI red-edge bands were highly consistent with those of MSI, with an R2 > 0.96 and an RMSE < 0.0122. The derived Landsat red-edge indices agree closely with those of Sentinel-2, with an R2 of 0.78 to 0.95 and an rRMSE of 3.37% to 21.64%. This study illustrates that the proposed red-edge reconstruction method can extend the spectral domain of Landsat OLI and enhance its applicability in global vegetation remote sensing. Meanwhile, it provides potential insight into historical Landsat TM/ETM+ data enhancement for improving time-series vegetation monitoring.
(This article belongs to the Special Issue Machine Learning in Image/Video Processing and Sensing)
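The reconstruction model itself is standard supervised regression. A minimal sklearn GBRT sketch with synthetic stand-in reflectances (the real workflow trains per vegetation type on paired OLI/MSI pixels, and the hyperparameters here are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Stand-in data: six OLI surface-reflectance bands as predictors,
# one MSI red-edge band as the regression target
rng = np.random.default_rng(8)
oli = rng.random((5000, 6))
red_edge = 0.6 * oli[:, 3] + 0.3 * oli[:, 2] + 0.02 * rng.normal(size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(oli, red_edge, random_state=0)
gbrt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                 max_depth=3, random_state=0)
gbrt.fit(X_tr, y_tr)
pred = gbrt.predict(X_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"R2 = {r2_score(y_te, pred):.3f}  RMSE = {rmse:.4f}")
```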

22 pages, 9592 KB  
Article
Discovery of Large Methane Emissions Using a Complementary Method Based on Multispectral and Hyperspectral Data
by Xiaoli Cai, Yunfei Bao, Qiaolin Huang, Zhong Li, Zhilong Yan and Bicen Li
Atmosphere 2025, 16(5), 532; https://doi.org/10.3390/atmos16050532 - 30 Apr 2025
Cited by 1 | Viewed by 1966
Abstract
As global atmospheric methane concentrations surge at an unprecedented rate, the identification of methane super-emitters with significant mitigation potential has become imperative. In this study, we utilize remote sensing satellite data with varying spatiotemporal coverage and resolutions to detect and quantify methane emissions. We exploit the synergistic potential of Sentinel-2, EnMAP, and GF5-02-AHSI for methane plume detection. Employing a matched filtering algorithm based on EnMAP and AHSI, we detect and extract methane plumes within emission hotspots in China and the United States, and estimate the emission flux rates of individual methane point sources using the IME model. We present methane plumes from industries such as oil and gas (O&G) and coal mining, with emission rates ranging from 1 to 40 tons per hour, as observed by EnMAP and GF5-02-AHSI. For selected methane emission hotspots in China and the United States, we conduct long-term monitoring and analysis using Sentinel-2. Our findings reveal that the synergy between Sentinel-2, EnMAP, and GF5-02-AHSI enables the precise identification of methane plumes, as well as the quantification and monitoring of their corresponding sources. This methodology is readily applicable to other satellite instruments with coarse SWIR spectral bands, such as Landsat-7 and Landsat-8. The high-frequency satellite-based detection of anomalous methane point sources can facilitate timely corrective actions, contributing to the reduction in global methane emissions. This study underscores the potential of spaceborne multispectral imaging instruments, combining fine pixel resolution with rapid revisit rates, to advance the global high-frequency monitoring of large methane point sources.
(This article belongs to the Special Issue Study of Air Pollution Based on Remote Sensing (2nd Edition))
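A minimal sketch of the classical matched filter, alpha = (x - mu)^T S^-1 t / (t^T S^-1 t), applied per pixel with background mean mu and covariance S estimated from the scene. The synthetic cube and the toy Gaussian absorption shape standing in for a CH4 unit-enhancement spectrum are illustrative assumptions, not the study's radiative-transfer-derived target.

```python
import numpy as np

def matched_filter(cube, target):
    """Per-pixel matched-filter enhancement score for a (H, W, B) cube
    against a (B,) target signature."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False) + 1e-6 * np.eye(B)   # regularized covariance
    Sinv_t = np.linalg.solve(S, target)
    alpha = (X - mu) @ Sinv_t / (target @ Sinv_t)
    return alpha.reshape(H, W)

rng = np.random.default_rng(9)
cube = rng.normal(1.0, 0.02, size=(60, 60, 30))      # stand-in SWIR radiances
t = -np.exp(-((np.arange(30) - 20) / 3.0) ** 2)      # toy absorption signature
cube[25:30, 25:30] += 0.05 * t                       # implant a plume
score = matched_filter(cube, t)
print(f"max enhancement inside plume: {score[25:30, 25:30].max():.3f}")
```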

8 pages, 3697 KB  
Proceeding Paper
Pansharpening Remote Sensing Images Using Generative Adversarial Networks
by Bo-Hsien Chung, Jui-Hsiang Jung, Yih-Shyh Chiou, Mu-Jan Shih and Fuan Tsai
Eng. Proc. 2025, 92(1), 32; https://doi.org/10.3390/engproc2025092032 - 28 Apr 2025
Cited by 1 | Viewed by 1239
Abstract
Pansharpening is a remote sensing image fusion technique that combines a high-resolution (HR) panchromatic (PAN) image with a low-resolution (LR) multispectral (MS) image to produce an HR MS image. The primary challenge in pansharpening lies in preserving the spatial details of the PAN image while maintaining the spectral integrity of the MS image. To address this challenge, this article presents a generative adversarial network (GAN)-based approach to pansharpening. The GAN discriminator encouraged the generated image to match the intensity of the HR PAN image while preserving the spectral characteristics of the LR MS image. Image generation performance was evaluated using the peak signal-to-noise ratio (PSNR). For the experiment, original LR MS and HR PAN satellite images were partitioned into smaller patches, and the GAN model was validated using an 80:20 training-to-testing data ratio. The results illustrated that the super-resolution images generated by the SRGAN model achieved a PSNR of 31 dB. These results demonstrated the developed model’s ability to reconstruct the geometric, textural, and spectral information from the images.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
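PSNR, the evaluation metric used above, is straightforward to compute. A minimal sketch with synthetic stand-in images (the peak value and patch shape are illustrative assumptions):

```python
import numpy as np

def psnr(reference, fused, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference HR MS image and
    a pansharpened result, both scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64)
                   - fused.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(10)
ref = rng.random((128, 128, 4))                  # stand-in HR MS patch
fused = ref + rng.normal(0, 0.02, ref.shape)     # stand-in generator output
print(f"PSNR: {psnr(ref, np.clip(fused, 0, 1)):.1f} dB")
```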