Search Results (596)

Search Parameters:
Keywords = radiometric performance

21 pages, 32230 KB  
Article
Structure-Aware Feature Descriptor with Multi-Scale Side Window Filtering for Multi-Modal Image Matching
by Junhong Guo, Lixing Zhao, Quan Liang, Xinwang Du, Yixuan Xu and Xiaoyan Li
Appl. Sci. 2026, 16(6), 3018; https://doi.org/10.3390/app16063018 - 20 Mar 2026
Abstract
Traditional image feature matching methods often fail to achieve satisfactory performance on multimodal remote sensing images (MRSIs), mainly due to significant nonlinear radiometric distortion (NRD) and complex geometric deformation caused by different imaging mechanisms. The key to successful MRSI matching lies in preserving high-frequency edge structures that are robust to geometric deformation, while overcoming nonlinear intensity mappings induced by NRD. To address these challenges, this paper proposes a novel high-precision matching framework, termed structure-aware feature descriptor with multi-scale side window filtering (SA-SWF). The proposed framework consists of three stages: (1) an anisotropic morphological scale space is constructed based on multi-scale side window filtering to strictly preserve geometric edges, and feature points are extracted using a multi-scale adaptive structure tensor with sub-pixel refinement to ensure high localization precision; (2) a structure-aware feature descriptor is constructed by integrating gradient reversal invariance and entropy-weighted attention mechanisms, rendering the multi-modal description highly robust against contrast inversion and noise; and (3) a coarse-to-fine robust matching strategy is established to progressively refine correspondences from descriptor-space matching to strict sub-pixel geometric verification, thereby minimizing alignment errors. Experiments on 60 multimodal image pairs from six categories, including infrared-infrared, optical–optical, infrared–optical, depth–optical, map–optical, and SAR–optical datasets, demonstrate that SA-SWF consistently outperforms seven state-of-the-art competitors. Across all six dataset categories, SA-SWF achieves a 100% success rate, the highest average number of correct matches (356.8), and the lowest average root mean square error (1.57 pixels). 
These results confirm the superior robustness, stability, and geometric accuracy of SA-SWF under severe radiometric and geometric distortions.
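For context, the root mean square error reported for SA-SWF is the standard matching-accuracy metric: the RMS of residual distances between matched points after applying the estimated geometric transform. A minimal sketch with hypothetical point data (not the SA-SWF pipeline itself):

```python
import numpy as np

def match_rmse(pts_src, pts_dst, transform):
    """Root mean square error (pixels) of correspondences.

    pts_src, pts_dst: (N, 2) arrays of matched keypoint coordinates.
    transform: 3x3 homogeneous matrix mapping src -> dst.
    """
    pts_h = np.hstack([pts_src, np.ones((len(pts_src), 1))])  # to homogeneous
    proj = (transform @ pts_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]                         # back to Cartesian
    residuals = np.linalg.norm(proj - pts_dst, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Identity transform, 1-pixel horizontal offset on every match -> RMSE = 1.0
src = np.array([[10.0, 20.0], [30.0, 40.0]])
dst = src + np.array([1.0, 0.0])
rmse = match_rmse(src, dst, np.eye(3))
```

A "sub-pixel" result such as the paper's 1.57 pixels means these residuals are, on average, below two pixels after the final geometric verification stage.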

29 pages, 5347 KB  
Article
Optimized Reinforcement Learning-Driven Model for Remote Sensing Change Detection
by Yan Zhao, Zhiyun Xiao, Tengfei Bao and Yulong Zhou
J. Imaging 2026, 12(3), 139; https://doi.org/10.3390/jimaging12030139 - 19 Mar 2026
Abstract
In recent years, deep learning has driven remarkable progress in remote sensing change detection (CD); however, practical deployment is still hindered by two limitations. First, CD results are easily degraded by imaging-induced uncertainties—mixed pixels and blurred boundaries, radiometric inconsistencies (e.g., shadows and seasonal illumination changes), and slight residual misregistration—leading to pseudo-changes and fragmented boundaries. Second, prevailing methods follow a static one-pass inference paradigm and lack an explicit feedback mechanism for adaptive error correction, which weakens generalization in complex or unseen scenes. To address these issues, we propose a feedback-driven CD framework that integrates a dual-branch U-Net with deep reinforcement learning (RL) for pixel-level probabilistic iterative refinement of an initial change probability map. The backbone produces a preliminary posterior estimate of change likelihood from multi-scale bi-temporal features, while a PPO-based RL agent formulates refinement as a Markov decision process. The agent leverages a state representation that fuses multi-scale features, prediction confidence/uncertainty, and spatial consistency cues (e.g., neighborhood coherence and edge responses) to apply multi-step corrective actions. From an imaging and interpretation perspective, the RL module can be viewed as a learnable, self-adaptive imaging optimization mechanism: for high-risk regions affected by blurred boundaries, radiometric inconsistencies, and local misalignment, the agent performs feedback-driven multi-step corrections to improve boundary fidelity and spatial coherence while suppressing pseudo-changes caused by shadows and illumination variations. Experiments on four datasets (CDD, SYSU-CD, PVCD, and BRIGHT) verify consistent improvements. 
Using SiamU-Net as an example, the proposed RL refinement increases mIoU by 3.07, 2.54, 6.13, and 3.1 points on CDD, SYSU-CD, PVCD, and BRIGHT, respectively, with similarly consistent gains observed when the same RL module is integrated into other representative CD backbones.
(This article belongs to the Section AI in Imaging)
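mIoU, the metric the RL refinement improves above, averages the per-class intersection-over-union between the predicted and reference change maps (class 1 = change, class 0 = no change in binary CD). A small illustrative implementation on toy label lists, not the paper's code:

```python
def mean_iou(pred, truth, num_classes=2):
    """Mean intersection-over-union over classes for flat label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        ious.append(inter / union if union else 0.0)
    return sum(ious) / num_classes

# One pixel mislabeled out of four: class-0 IoU = 1/2, class-1 IoU = 2/3
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1])
```

A gain of "3.07 points" in the abstract refers to this quantity expressed on a 0-100 scale.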

40 pages, 5583 KB  
Article
Traceable Time-Domain Photovoltaic Module Modeling with Plane-of-Array Irradiance and Solar Geometry Coupling: White-Box Simulink Implementation and Experimental Validation
by Ciprian Popa, Florențiu Deliu, Adrian Popa, Narcis Octavian Volintiru, Andrei Darius Deliu, Iancu Ciocioi and Petrică Popov
Energies 2026, 19(6), 1437; https://doi.org/10.3390/en19061437 - 12 Mar 2026
Abstract
Accurate time-domain photovoltaic (PV) models are needed to evaluate performance under outdoor variability beyond STC datasheet conditions. This paper presents a traceable modeling workflow based on the standard single-diode formulation, implemented in MATLAB/Simulink (R2023a) as a modular white-box architecture that explicitly resolves photocurrent generation and loss mechanisms (diode recombination, shunt leakage, and series resistance effects) with temperature-consistent propagation through VT(T) and saturation-current terms. The method couples optical boundary conditions to the electrical model by embedding plane-of-array (POA) excitation via the incidence angle θ(t) and roof albedo directly into the photocurrent source term, preserving the causal chain from mounting geometry to electrical response. Calibration is separated from prediction by initializing key parameters using the standard Simulink PV block and then freezing them for time-domain evaluation. The workflow is validated on a 395 W rooftop prototype using 1 min resolved POA irradiance (ISO 9060:2018 Class A radiometric chain) and module temperature (IEC 60751 Class A Pt100), synchronized with electrical measurements. Over a multi-week campaign, the model exhibits high fidelity, with a worst-case relative current error of ~1.1% and a consistently low bias and dispersion, quantified by ME, MAE, RMSE, σe, and thresholded MAPE. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
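The single-diode formulation referenced above models module current implicitly as I = Iph − I0·(exp((V + I·Rs)/(n·VT)) − 1) − (V + I·Rs)/Rsh, with the thermal voltage VT = kT/q carrying the temperature dependence. A sketch with illustrative, non-datasheet parameters (the values below are assumptions for demonstration), solved by damped fixed-point iteration:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant (J/K)
Q_E = 1.602176634e-19  # elementary charge (C)

def cell_current(v, i_ph, i_0, r_s, r_sh, n, t_kelvin, iters=200):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*VT)) - 1) - (V + I*Rs)/Rsh
    by damped fixed-point iteration; VT = kT/q is temperature dependent."""
    v_t = K_B * t_kelvin / Q_E
    i = i_ph  # start from the photocurrent
    for _ in range(iters):
        i_new = (i_ph
                 - i_0 * (math.exp((v + i * r_s) / (n * v_t)) - 1.0)
                 - (v + i * r_s) / r_sh)
        i = 0.5 * i + 0.5 * i_new  # damping for numerical stability
    return i

# Illustrative 72-cell module at 25 degrees C (parameters are hypothetical):
PARAMS = dict(i_ph=9.5, i_0=1e-9, r_s=0.3, r_sh=300.0,
              n=1.3 * 72, t_kelvin=298.15)
i_sc = cell_current(0.0, **PARAMS)   # short-circuit current, ~Iph minus shunt loss
i_hi = cell_current(45.0, **PARAMS)  # near the knee: diode term bites
```

The paper's white-box point is that each term here (photocurrent, diode recombination, shunt leakage, series drop) stays an explicit, inspectable block rather than a fitted black box.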

21 pages, 4639 KB  
Article
Deep Learning-Based Real-Time Vehicle Tire and Tank Temperature Monitoring Using Thermal Cameras
by Yaoyao Hu, Jiaxin Li, Chuanyi Ma, Shuai Cheng, Ruolin Zheng and Xingang Zhang
Appl. Sci. 2026, 16(6), 2656; https://doi.org/10.3390/app16062656 - 11 Mar 2026
Abstract
Ensuring the driving safety of hazardous chemical vehicles is a critical priority. High temperatures in tires and tanks can lead to catastrophic accidents, including fires and road damage, particularly in bridge and tunnel sections. Therefore, the purpose of this study is to utilize deep learning to obtain the temperature of vehicle tires and tanks in real time. We constructed a comprehensive dataset by combining the FLIR infrared vehicle dataset, the SPT visible tire dataset, and self-collected thermal video frames captured in various environments. State-of-the-art object detection models, including different scales of YOLOv8, YOLOv9, and YOLOv10, were evaluated for the multi-target detection of vehicles, tires, and tanks. Comparative analysis reveals that the YOLOv8-L model optimized with the GIoU loss function delivers the best performance. Specifically, it achieves a mean Average Precision (mAP) of 97.9% with an average inference time of 6.9 ms per frame, effectively balancing accuracy and real-time efficiency. Finally, by mapping the detection bounding boxes to the radiometric temperature matrix, the system achieves precise, real-time temperature monitoring of the vehicle components. Full article
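The GIoU loss used to optimize YOLOv8-L above extends plain IoU with a penalty based on the smallest enclosing box C, so even disjoint boxes receive a useful gradient; the loss is 1 − GIoU. A compact sketch on toy axis-aligned boxes:

```python
def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes (x1, y1, x2, y2).
    GIoU = IoU - (area of C not covered by A or B) / area of C,
    where C is the smallest enclosing box; ranges in (-1, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    return iou - (c_area - union) / c_area

perfect = giou((0, 0, 10, 10), (0, 0, 10, 10))   # identical boxes -> 1.0
disjoint = giou((0, 0, 1, 1), (2, 0, 3, 1))      # separated boxes -> negative
```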

24 pages, 87005 KB  
Article
Filling the Gap: Elevation-Based Sentinel-1 Surface Soil Moisture Retrieval over the Austrian Alps
by Samuel Massart, Mariette Vreugdenhil, Juraj Parajka, Carina Villegas-Lituma, Ignacio Borlaf-Mena, Patrik Sleziak and Wolfgang Wagner
Remote Sens. 2026, 18(6), 855; https://doi.org/10.3390/rs18060855 - 10 Mar 2026
Abstract
As climate change increasingly impacts the water cycle across the Alpine region, monitoring surface soil moisture is essential for hydrological models and drought early warning. Yet operational products either mask steep terrain or lack the spatial resolution to capture the surface soil moisture (SSM) spatial variability of Alpine catchments. This study presents a novel retrieval approach aggregating Sentinel-1 radiometric terrain-corrected backscatter (γ0) into 100 m elevation bands per sub-basin and aspect across the Austrian Alps. The resulting Alpine backscatter product is processed through an orbit-wise change detection to derive over 34,000 SSM time series, evaluated using ERA5-Land and compared to 264 precipitation stations from Geosphere for the period from 2016 to 2024. The results show satisfactory agreement with ERA5-Land (Pearson correlation > 0.46 below 400 m) and capture in situ precipitation-driven anomalies with the strongest performance below 400 m (Spearman correlation > 0.47), particularly over grasslands and south-facing slopes. Despite its limitations at high elevation and over dense vegetation, Sentinel-1 provides consistent and elevation-stratified information across more than 80% of the Austrian Alps, an area typically excluded from operational products. The new Alpine SSM product highlights Sentinel-1’s potential to support hydrological modeling, drought monitoring, and water resource management across complex topography such as the Alps.
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
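The change-detection step above converts backscatter into relative soil moisture by scaling each observation between dry and wet reference backscatter levels, SSM = (γ0 − γ0,dry)/(γ0,wet − γ0,dry) (the standard TU Wien-style formulation). A sketch with hypothetical reference values for one elevation band and aspect class:

```python
def ssm_change_detection(gamma0_db, dry_ref_db, wet_ref_db):
    """Relative surface soil moisture (0-1) from terrain-corrected
    backscatter via linear change detection between dry/wet references."""
    ssm = (gamma0_db - dry_ref_db) / (wet_ref_db - dry_ref_db)
    return min(1.0, max(0.0, ssm))  # clip to the physical range

# Hypothetical gamma0 time series (dB) for one band; drier -> wetter:
series = [-14.0, -12.0, -10.5, -9.0]
ssm = [ssm_change_detection(g, dry_ref_db=-15.0, wet_ref_db=-9.0)
       for g in series]
```

Aggregating γ0 into 100 m elevation bands before this scaling is what lets the method trade spatial detail for radiometric stability in steep terrain.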

28 pages, 6157 KB  
Article
RI-DVP: A Physics–Geometry Dual-Driven Framework for Static Map Construction in Sparse LiDAR Scenarios
by Xiaokai Li, Li Wang, Haolong Luo and Guangyun Li
Remote Sens. 2026, 18(5), 821; https://doi.org/10.3390/rs18050821 - 6 Mar 2026
Abstract
High-fidelity static map construction is essential for reliable autonomous navigation, yet dynamic environments introduce severe artifacts caused by moving objects (also referred to as dynamic artifacts) in accumulated maps. While geometry-based methods perform well on dense point clouds, their performance notably degrades on sparse 16-beam LiDAR due to the “Sparsity Trap”: dynamic objects are frequently missed by ray-based geometry, and purely geometric cues fail in radiometrically ambiguous scenarios. To address this, we propose RI-DVP, a physics–geometry dual-driven framework. Unlike conventional approaches, RI-DVP first performs a physics-inspired radiometric normalization that compensates for range attenuation and incidence-angle effects to establish a consistent signal baseline. Subsequently, a Dual-Residual Aggressive Removal (DRAR) module jointly exploits geometric residuals—bounded by a range-dependent spatial uncertainty envelope—and calibrated intensity residuals to detect geometrically indistinguishable objects. To balance recall and precision, a Hierarchical Static Reversion strategy (HSR) employs two-stage recovery to retrieve large-scale structures and correct fine-grained artifacts via topology-based adhesion reasoning. Experiments on SemanticKITTI and custom sparse datasets demonstrate that RI-DVP outperforms state-of-the-art geometric baselines, improving Dynamic Accuracy by over 36 percentage points in sparse scanning scenarios using a VLP-16 LiDAR sensor (Velodyne Acoustics, Inc., Morgan Hill, CA, USA) compared to baselines that fail under the sparsity trap while achieving real-time performance at approximately 15.3 Hz. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
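The paper's radiometric normalization compensates range attenuation and incidence-angle effects; the exact model is not given in the abstract, but a commonly used physics-based form (assumed here purely for illustration) scales raw intensity by (R/R_ref)² and divides by the Lambertian cos θ falloff:

```python
import math

def normalize_intensity(raw, range_m, incidence_rad, ref_range_m=10.0):
    """Illustrative physics-inspired intensity normalization: compensate
    1/R^2 range attenuation and cos(theta) incidence-angle falloff,
    referenced to a fixed range so returns become comparable."""
    return raw * (range_m / ref_range_m) ** 2 / math.cos(incidence_rad)

# A return at 20 m and normal incidence, raw intensity 25:
val = normalize_intensity(25.0, 20.0, 0.0)
```

After such a normalization, intensity residuals between scans become meaningful, which is what the DRAR module exploits alongside geometric residuals.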

19 pages, 4890 KB  
Article
MTA-Dataset: Multiple-Tilt-Angle Dataset for UAV–Satellite Image Matching
by Qifei Liu, Liang Jiang, Guoqiang Wu, Kun Huang, Haohui Sun and Gengchen Liu
Appl. Sci. 2026, 16(5), 2488; https://doi.org/10.3390/app16052488 - 4 Mar 2026
Abstract
Accurate target localization via matching real-time UAV images with reference satellite imagery is essential for autonomous environmental perception. Nonetheless, operational constraints and weather conditions often necessitate oblique photography. This large-tilt mode causes significant perspective and radiometric distortions, resulting in a substantial domain gap between UAV and vertical satellite imagery. The scarcity of datasets featuring extreme viewpoint shifts and fine-grained ground-truth labels hinders the validation of image matching algorithms in multi-tilt-angle environments. To address this issue, we introduce the multiple-tilt-angle dataset (MTA-Dataset), containing 1892 UAV images with tilt angles spanning [0°, 90°] and flight altitudes up to 300 m, supported by high-precision five-point manual annotations. Based on this benchmark, we evaluate state-of-the-art matching algorithms and propose a spatial-resolution-based cropping strategy. Experimental results demonstrate that, as the UAV tilt angle increases within [0°, 90°], the expanding field of view provides richer contextual information, but the localization errors of all methods increase significantly and matching precision drops sharply, due to severe geometric distortions in far-field regions and interference from redundant background information; performance deteriorates most drastically in the [50°, 90°] range. With the integration of our strategy, the average matching localization errors of the SuperPoint + SuperGlue baseline for UAV images within the tilt-angle ranges [50°, 60°], [60°, 70°], [70°, 80°], and [80°, 90°] are reduced by 33.49 m, 37.86 m, 98.3 m, and 109.95 m, respectively. Our study provides a more comprehensive evaluation framework for robust UAV–satellite image matching algorithms in multi-tilt-angle scenarios.

25 pages, 5609 KB  
Article
Design and In-Orbit Validation of a Novel Compact Bidirectional Trapezoidal Reflector for X-Band Spaceborne SAR Absolute Radiometric Calibration
by Shiyu Sun, Yu Wang, Huijuan Li and Xin Zhang
Remote Sens. 2026, 18(5), 770; https://doi.org/10.3390/rs18050770 - 3 Mar 2026
Abstract
Spaceborne synthetic aperture radar (SAR) absolute radiometric calibration relies on point targets with a known radar cross-section (RCS), such as triangular trihedral corner reflectors (TTCRs). Traditionally, radiometric calibration using TTCRs requires precise alignment of the corner reflector (CR) boresight to the radar line-of-sight (LOS), leading to frequent field operations and high labor dependency. In this study, a novel compact bidirectional trapezoidal CR is proposed to eliminate such alignment reorientations. The novel CR adopts three design considerations: a scalene shape to optimize the boresight elevation angle and enhance the peak RCS; a bidirectional configuration with azimuth fine-tuning to align with the radar LOS for both ascending and descending passes; and trapezoidal plate trimming to reduce the volume and weight without sacrificing RCS performance. An in-orbit validation is conducted in Xi’an, China, using the SuperView Neo 2-03 satellite. The results demonstrate that the imaging quality of the bidirectional trapezoidal CRs is comparable to that of conventional TTCRs, with all the parameters meeting system specifications. The radiometric calibration constant of the bidirectional trapezoidal CR differs from that of the conventional TTCR by no more than 0.27 dB, with a total uncertainty of ~0.33 dB (1σ)—demonstrating that it achieves equivalent radiometric calibration accuracy to TTCRs. The experiment confirms the feasibility and engineering applicability of the bidirectional trapezoidal CR for X-band SAR radiometric calibration. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
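For context on why plate geometry matters above: the peak RCS of a conventional triangular trihedral corner reflector scales with the fourth power of its inner leg length a, σ = 4πa⁴/(3λ²), which is why trapezoidal trimming can save volume and weight with only a modest RCS penalty. A quick sketch with an illustrative leg length and X-band frequency (both hypothetical, not the paper's design values):

```python
import math

def ttcr_peak_rcs_dbsm(leg_m, freq_hz):
    """Peak RCS of a triangular trihedral corner reflector,
    sigma = 4*pi*a^4 / (3*lambda^2), expressed in dBsm."""
    wavelength = 299792458.0 / freq_hz   # c / f
    sigma = 4.0 * math.pi * leg_m ** 4 / (3.0 * wavelength ** 2)
    return 10.0 * math.log10(sigma)

# A 1 m leg TTCR at an X-band frequency of 9.6 GHz:
rcs = ttcr_peak_rcs_dbsm(1.0, 9.6e9)
```

The a⁴ dependence also explains the calibration stakes: a 0.27 dB constant difference, as reported above, corresponds to only a few percent in linear RCS.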

28 pages, 11762 KB  
Article
A Coarse-to-Fine Optical-SAR Image Registration Algorithm for UAV-Based Multi-Sensor Systems Using Geographic Information Constraints and Cross-Modal Feature Consistency Mapping
by Xiaoyong Sun, Zhen Zuo, Xiaojun Guo, Xuan Li, Peida Zhou, Runze Guo and Shaojing Su
Remote Sens. 2026, 18(5), 683; https://doi.org/10.3390/rs18050683 - 25 Feb 2026
Abstract
Optical and synthetic aperture radar (SAR) image registration faces challenges from nonlinear radiometric distortions and geometric deformations caused by different imaging mechanisms. This paper proposes a coarse-to-fine registration algorithm integrating geographic information constraints with cross-modal feature consistency mapping. The coarse stage employs imaging geometry-based coordinate transformation with airborne navigation data to eliminate scale and rotation differences. The fine stage constructs a multi-scale phase congruency-based feature response aggregation model combined with rotation-invariant descriptors and global-to-local search for sub-pixel alignment. Experiments on integrated airborne optical/SAR datasets demonstrate superior performance with an average RMSE of 2.00 pixels, outperforming both traditional handcrafted methods (3MRS, OS-SIFT, POS-GIFT, GLS-MIFT) and state-of-the-art deep learning approaches (SuperGlue, LoFTR, ReDFeat, SAROptNet) while reducing execution time by 37.0% compared with the best-performing baseline. The proposed coarse registration also serves as an effective preprocessing module that improves SuperGlue’s matching rate by 167% and LoFTR’s by 109%, with a hybrid refinement strategy achieving 1.95 pixels RMSE. The method demonstrates robust performance under challenging conditions, enabling real-time UAV-based multi-sensor fusion applications. Full article

22 pages, 6011 KB  
Article
Remote Sensing for Vegetation Monitoring: Insights of a Cross-Platform Coherence Evaluation
by Eduardo R. Oliveira, Tiago van der Worp da Silva, Luísa M. Gomes Pereira, Nuno Vaz, Jan Jacob Keizer and Bruna R. F. Oliveira
Land 2026, 15(2), 306; https://doi.org/10.3390/land15020306 - 11 Feb 2026
Abstract
Remote sensing has revolutionized monitoring landscapes that are inaccessible or impractical to survey on the ground. Satellite platforms such as Sentinel-2 enable assessment of ecosystem changes over extensive areas with high temporal frequency, while Unmanned Aerial Systems (UAS) offer flexible, ultra-high-resolution observations ideal for site-specific analysis and sensitive environments. This study compares the performance of Sentinel-2 and Phantom 4 multispectral RTK data for monitoring vegetation dynamics in Mediterranean shrubland ecosystems, focusing on the Normalized Difference Vegetation Index (NDVI). Both platforms produced broadly consistent patterns in seasonal and interannual vegetation dynamics. However, UAS outperformed satellite data in capturing fine-scale heterogeneity, regeneration patches, and subtle disturbance responses, particularly in sparsely vegetated or heterogeneous terrain where satellite metrics may be insensitive. The comparison of NDVI across platforms accounted for standardized processing, harmonization, radiometric and atmospheric correction, and spatial resolution differences. Results show platform selection can be optimized according to monitoring objectives: satellite data are well suited for long-term monitoring of landscape-level vegetation dynamics, as both platforms capture consistent patterns when evaluated at comparable, spatially aggregated scales, while UAS data provide critical detail for localized management, early stress detection, and restoration prioritization by resolving fine-scale features. A combined approach enhances ecosystem disturbance assessments and resource management by uniting the strengths of both wide-area coverage and precise spatial detail.
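NDVI, the index compared across platforms above, is computed per pixel from near-infrared and red surface reflectance; radiometric and atmospheric correction matter precisely because both bands must be on a consistent reflectance scale before the ratio is taken. A minimal sketch with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Ranges from -1 to 1; dense green vegetation is typically > 0.6."""
    return (nir - red) / (nir + red)

# Illustrative reflectances: healthy shrub canopy vs. sparsely vegetated soil
dense = ndvi(0.45, 0.05)
sparse = ndvi(0.25, 0.20)
```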

15 pages, 3953 KB  
Article
Age Prediction of Hematoma from Hyperspectral Images Using Convolutional Neural Networks
by Arash Keshavarz, Gerald Bieber, Daniel Wulff, Carsten Babian and Stefan Lüdtke
J. Imaging 2026, 12(2), 78; https://doi.org/10.3390/jimaging12020078 - 11 Feb 2026
Abstract
Accurate estimation of hematoma age remains a major challenge in forensic practice, as current assessments rely heavily on subjective visual interpretation. Hyperspectral imaging (HSI) captures rich spectral signatures that may reflect the biochemical evolution of hematomas over time. This study evaluates whether a convolutional neural network (CNN) integrating both spectral and spatial information improves hematoma age estimation accuracy. Additionally, we investigate whether performance can be maintained using a reduced, physiologically motivated subset of wavelengths. Using a dataset of forearm hematomas from 25 participants, we applied radiometric normalization and SAM-based segmentation to extract 64×64×204 hyperspectral patches. In leave-one-subject-out cross-validation, the CNN outperformed a spectral-only Lasso baseline, reducing the mean absolute error (MAE) from 3.24 days to 2.29 days. Band-importance analysis combining SmoothGrad and occlusion sensitivity identified 20 highly informative wavelengths; using only these bands matched or exceeded the accuracy of the full 204-band model across early, middle, and late hematoma stages. These results demonstrate that spectral–spatial modeling and physiologically grounded band selection can enhance estimation accuracy while significantly reducing data dimensionality. This approach supports the development of compact multispectral systems for objective clinical and forensic evaluation. Full article
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)
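Leave-one-subject-out cross-validation, used above, holds out every patch from one participant per fold so that patches from the same hematoma never appear in both train and test sets. A minimal split generator (the sample bookkeeping is hypothetical, not the paper's code):

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out, train_idx, test_idx) folds where all samples from
    one subject are held out together, preventing subject leakage."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Six hyperspectral patches drawn from three participants:
folds = list(leave_one_subject_out(["p1", "p1", "p2", "p3", "p3", "p3"]))
```

With 25 participants, the paper's protocol produces 25 such folds, and the reported MAE (2.29 days) is averaged over the held-out subjects.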

21 pages, 13894 KB  
Article
Forecasting Spring Wheat Maturity from UAV-Based Multispectral Imagery Using Machine and Deep Learning Models
by Prabahar Ravichandran, Keshav D. Singh, Harpinder S. Randhawa and Shubham Subrot Panigrahi
AgriEngineering 2026, 8(2), 62; https://doi.org/10.3390/agriengineering8020062 - 10 Feb 2026
Cited by 1
Abstract
Accurate forecasting of crop maturity supports efficient harvest planning and accelerates selection decisions in breeding programs. In spring wheat, maturity is typically assessed through manual scoring late in the season, which limits its usefulness for timely harvest management and early selection decisions in breeding programs. This study evaluated uncrewed aerial vehicle (UAV)–based multispectral imagery for forecasting maturity in spring wheat grown at Lethbridge, Alberta (AB), Canada, during the 2024 and 2025 growing seasons. Thirty cultivars were monitored using seven-band UAV multispectral imagery during grain filling, enabling derivation of core vegetation and senescence-related indices from radiometrically calibrated orthomosaics. Strong correlations (|r| > 0.85) were observed between vegetation indices and days remaining to maturity (DRTM), motivating baseline regression models and subsequent evaluation of eleven machine-learning and deep-learning approaches. Among these, support vector regression (SVR) and multi-layer perceptron (MLP) achieved the highest predictive accuracy (R² = 0.95–0.96; mean absolute error (MAE) ≈ 1.25 days). Deep learning models achieved performance comparable to machine-learning approaches; however, incorporating spatial information through convolutional neural networks did not improve prediction accuracy. Feature-attribution analysis identified the red, red-edge (RE), and near-infrared (NIR) spectral bands as key predictors, enabling non-destructive, early, and scalable UAV-based maturity forecasting.
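The |r| > 0.85 screening above uses the Pearson correlation between each vegetation-index time series and days remaining to maturity (DRTM); greenness indices fall as maturity approaches, giving strong positive correlation with the shrinking DRTM. A self-contained sketch with toy values:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical NDVI declining as DRTM counts down to maturity:
ndvi_series = [0.82, 0.74, 0.61, 0.45, 0.30]
drtm_series = [20, 15, 10, 5, 0]
r = pearson_r(ndvi_series, drtm_series)
```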

20 pages, 3611 KB  
Article
From [99mTc]pertechnetate to [99mTc]sestamibi: Dissection of a Complex Reaction Sequence Using Radio-LC-MS
by Joana do Mar Ferreira Machado, Antonio Shegani, Ingebjørg N. Hungnes, Truc T. Pham, Amaia Carrascal-Miniño, Margaret S. Cooper, Victoria Gibson, Levente K. Meszaros, Michelle T. Ma and Philip J. Blower
Molecules 2026, 31(4), 596; https://doi.org/10.3390/molecules31040596 - 9 Feb 2026
Abstract
[99mTc]sestamibi ([99mTc][Tc(MIBI)6]+; MIBI = 2-methoxyisobutylisonitrile) is a clinically established myocardial perfusion SPECT tracer. Its one-pot kit-based synthesis from [99mTc]pertechnetate ([99mTc][TcO4]) is complex, involving a six-oxidation-state transition (Tc(VII) to Tc(I)) and complete ligand replacement. We aimed to unravel this complex reaction, to inform rational quality control and identify new technetium synthons for molecular imaging. Generator-produced [99mTc]pertechnetate was added to commercial or bespoke clinically used kits, varying the reaction time, temperature, and concentrations of reagents (individually and collectively) and carrier technetium-99. Radioactive products were analysed by thin-layer chromatography (TLC) and high-performance liquid chromatography (HPLC) with optical, radiometric, and mass spectrometric (MS-ESI+) detection. At least 11 radioactive intermediates were detected by radio-HPLC. Technetium(V) and technetium(I) intermediates were identified or imputed by radio-HPLC-MS, including [TcVO(cysteinate)2]+, [TcI(MIBI)4L2]+, and [TcI(MIBI)5L1]+ (L = labile monodentate ligand, e.g., H2O). Tc(III) intermediates [TcIII(cysteinate)2(MIBI)]+ and [TcIII(cysteinate)2(MIBI)2]+ were indicated by weak MS-ESI+ ions. We conclude that the reaction proceeds via reduction from [TcVIIO4] via unknown intermediates to [TcVO(cysteinate)2]+, then via Tc(III) intermediates containing both cysteinate and MIBI ligands (e.g., [TcIII(cysteinate)2(MIBI)2]+), to form Tc(I) without cysteine and with <6 MIBI ligands, followed by further ligand displacement by MIBI to form [Tc(MIBI)6]+. Once formed, [Tc(MIBI)6]+ undergoes no further reaction.
(This article belongs to the Special Issue New Advances in Radiopharmaceutical Sciences, 2nd Edition)

21 pages, 5303 KB  
Article
A Mirror-Reflection Method for Measuring Microwave Emissivity of Flat Scenes with Ground-Based Radiometers
by Shilin Li, Taoyun Zhou, Yun Cheng, Yiming Xu, Xiaokang Mei, Jieqia Chen and Hailiang Lu
Remote Sens. 2026, 18(2), 341; https://doi.org/10.3390/rs18020341 - 20 Jan 2026
Abstract
Accurate brightness temperature (TB) measurement and microwave emissivity retrieval in passive microwave sensing conventionally rely on absolute radiometric calibration, which often requires additional hardware and complex procedures. This study proposes a mirror-reflection-based method for measuring the microwave emissivity of flat scenes with ground-based radiometers under well-defined geometric and environmental conditions, without conventional absolute calibration. The method employs a simplified four-step observation sequence in which the radiometer measures the pure flat scene, the flat scene with mirror reflection, the reference wall, and the cold sky. A geometric model determines the effective incidence-angle range, and an analytical framework evaluates retrieval accuracy. Numerical simulations examine the effects of scene material, reference-wall properties, operating frequency, polarization, and radiometric sensitivity. Outdoor experiments are further performed to assess feasibility under practical measurement conditions. The results show that, within moderate incidence-angle ranges and under stable radiometric conditions, the retrieved emissivities of flat scenes agree well with theoretical predictions. These findings indicate that the proposed mirror-reflection-based approach provides a feasible supplementary or alternative solution for emissivity estimation of flat targets in ground-based measurements when absolute calibration is unavailable or impractical, rather than a replacement for conventional calibration techniques.
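The abstract does not reproduce the retrieval equations, but the calibration-free idea can be illustrated with a minimal sketch. Assuming a linear radiometer response v = a·TB + b with unknown gain a and offset b, and an ideal reference wall (emissivity ≈ 1), the gain and offset cancel in a ratio of raw-output differences; the mirror-reflection step of the paper's four-step sequence is omitted here for brevity, so this is an illustrative simplification, not the authors' method:

```python
def retrieve_emissivity(v_scene, v_wall, v_sky, t_scene, t_wall, t_sky):
    """Calibration-free emissivity estimate from raw radiometer outputs.

    Assumes v = a*TB + b (a, b unknown) and an ideal blackbody wall, so that
    (v_scene - v_sky) / (v_wall - v_sky)
        = e * (t_scene - t_sky) / (t_wall - t_sky),
    which can be solved for the scene emissivity e without knowing a or b.
    Temperatures are physical temperatures in kelvin; t_sky is the effective
    cold-sky brightness temperature.
    """
    return ((v_scene - v_sky) * (t_wall - t_sky)) / (
        (v_wall - v_sky) * (t_scene - t_sky)
    )


# Forward-model a synthetic measurement (hypothetical gain/offset) and recover e.
a, b = 0.5, 10.0                      # unknown to the retrieval
e_true, t_scene, t_wall, t_sky = 0.9, 290.0, 295.0, 30.0
tb_scene = e_true * t_scene + (1.0 - e_true) * t_sky
v_scene = a * tb_scene + b
v_wall = a * t_wall + b
v_sky = a * t_sky + b
e_est = retrieve_emissivity(v_scene, v_wall, v_sky, t_scene, t_wall, t_sky)
```

Because only differences of radiometer outputs appear, any common gain drift or offset cancels, which is the essence of avoiding absolute calibration.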

32 pages, 8079 KB  
Article
Daytime Sea Fog Detection in the South China Sea Based on Machine Learning and Physical Mechanism Using Fengyun-4B Meteorological Satellite
by Jie Zheng, Gang Wang, Wenping He, Qiang Yu, Zijing Liu, Huijiao Lin, Shuwen Li and Bin Wen
Remote Sens. 2026, 18(2), 336; https://doi.org/10.3390/rs18020336 - 19 Jan 2026
Abstract
Sea fog is a major meteorological hazard that severely disrupts maritime transportation and economic activities in the South China Sea. As China's next-generation geostationary meteorological satellite, Fengyun-4B (FY-4B) supplies continuous observations well suited to sea fog monitoring, yet a satellite-specific recognition method has been lacking. A key obstacle is the radiometric inconsistency between the Advanced Geostationary Radiation Imager (AGRI) sensors on FY-4A and FY-4B, compounded by the cessation of Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) observations, which prevents direct transfer of fog labels. To address these challenges, we propose a machine learning framework that integrates cross-satellite radiometric recalibration and physical mechanism constraints for robust daytime sea fog detection. First, we apply a radiation recalibration transfer technique based on a radiative transfer model to normalize FY-4A/B radiances and, together with Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/fog classification products and ERA5 reanalysis, construct a highly consistent joint FY-4A/B training set for the winter–spring seasons since 2019. Second, to enhance the model's physical fidelity, we incorporate key parameters of the sea fog formation process (such as temperature inversion, near-surface humidity, and wind field characteristics) as physical constraints, combining them with multispectral channel sensitivities and the brightness temperature (BT) standard deviation that characterizes texture smoothness, yielding an optimized 13-dimensional feature matrix. Using this, we tune the hyperparameters of decision tree (DT), random forest (RF), and support vector machine (SVM) classifiers with grid search and particle swarm optimization (PSO) algorithms.
The validation results show that the RF model performs best, with the highest overall classification accuracy (0.91) and a probability of detection (POD, 0.81) that surpasses prior FY-4A-based work for the South China Sea (POD 0.71–0.76). More importantly, this study demonstrates that the proposed FY-4B framework provides reliable technical support for operational, continuous sea fog monitoring over the South China Sea.
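The abstract describes tuning an RF classifier over a 13-dimensional feature matrix by grid search. A minimal sketch of that pipeline, using scikit-learn and a synthetic stand-in for the feature matrix (the feature names, grid values, and labels here are assumptions for illustration, not the paper's configuration; the PSO stage is omitted):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 13-dimensional feature matrix
# (multispectral radiances, BT standard deviation, inversion strength, etc.)
X = rng.normal(size=(500, 13))
# Hypothetical binary labels: 1 = fog pixel, 0 = non-fog pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Grid search over a small, illustrative hyperparameter grid
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, None]},
    cv=3,
    scoring="accuracy",
)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)  # held-out overall classification accuracy
```

In an operational setting the probability of detection (POD) reported in the paper would be computed from the confusion matrix on labeled fog pixels rather than plain accuracy.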
(This article belongs to the Section Atmospheric Remote Sensing)