Search Results (400)

Search Parameters:
Keywords = panchromatic

26 pages, 8878 KB  
Article
A Spectrally Compatible Pseudo-Panchromatic Intensity Reconstruction for PCA-Based UAS RGB–Multispectral Image Fusion
by Dimitris Kaimaris
J. Imaging 2026, 12(3), 122; https://doi.org/10.3390/jimaging12030122 - 11 Mar 2026
Viewed by 293
Abstract
The paper presents a method for generating a pseudo-panchromatic (PPAN) orthophotomosaic that is spectrally compatible with the multispectral (MS) orthophotomosaic, targeting the fusion of unmanned aircraft system (UAS) RGB–MS orthophotomosaics when no true panchromatic band is available. In typical UAS imaging systems, RGB and multispectral sensors operate independently and exhibit different spectral responses and spatial resolutions, making the construction of a spectrally compatible substitution intensity a critical challenge for component substitution fusion. The conventional RGB-derived PPAN preserves spatial detail but is constrained by RGB–MS spectral incompatibility, expressed as reduced corresponding-band similarity. It is proposed that the PPANE orthophotomosaic be produced as a hybrid intensity (single-band) image: a multispectral-visible-derived intensity is resampled onto the RGB grid and statistically integrated with RGB spatial detail, followed by mild high-frequency enhancement to produce the final PPANE orthophotomosaic. Principal Component Analysis (PCA) fusion is applied to seven archaeological sites in Northern Greece. Spectral quality is evaluated on the MS grid using band-wise (corresponding-band) correlation and the Spectral Angle Mapper (SAM), while the spatial sharpness of the fused NIR orthophotomosaic is assessed using Tenengrad and Laplacian variance. The proposed hybrid intensity (PPANE) increases the mean corresponding-band correlation from 0.842 (PPANA) to 0.928, especially in Red Edge/NIR, and reduces the across-site mean SAM from 5.782° to 4.264°, while maintaining spatial sharpness comparable to the RGB-derived intensity. PPANC yields the lowest SAM but also the lowest spatial sharpness, whereas PPANE maintains sharpness comparable to PPANA, supporting a balance between spectral consistency and spatial detail, as also confirmed through comparative evaluation against established component substitution fusion methods. The approach is reproducible and avoids full histogram matching; instead, it relies on explicitly defined linear standardization steps (mean–std normalization) and controlled spatial sharpening, and performs consistently across different scenes.
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

12 pages, 2048 KB  
Article
Violet Anthraquinone for Expanding the Color Palette of Electrochromes with Three Discrete Colors and Full Color Bleaching
by Ilies Seddiki, Thierry Maris and W. G. Skene
Molecules 2026, 31(5), 879; https://doi.org/10.3390/molecules31050879 - 6 Mar 2026
Viewed by 362
Abstract
An anthraquinone chromophore displaying a vivid violet color in solution was synthesized and thoroughly characterized spectroscopically, electrochemically, and by X-ray crystallography. Single-crystal X-ray analysis of the chromophore revealed a nearly planar π-conjugated framework with short intermolecular contacts. Cyclic voltammetry revealed two consecutive one-electron reductions, corresponding to the formation of its radical anion and dianion. The spectroelectrochemistry of the chromophore confirmed two distinct and reversible color changes with the stepwise electrochemical reduction. These were quantified via the CIE L*a*b* color space. Large optical differences (98%) between the bleached and colored states were observed, along with a coloration efficiency of 698 cm2/C. These parameters confirm the anthraquinone is an ideal electrochrome, capable of reversibly switching its colors with applied potential. The three color changes and color bleaching associated with the neutral, radical anion, dianion, and cation, respectively, are also of interest for extending the palette of colors of molecular electrochromes toward panchromatic color tuning with molecular structure for use in smart windows and displays.
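Coloration efficiency is conventionally defined as the change in optical density per unit of injected charge, η = ΔOD/Q with ΔOD = log10(T_bleached/T_colored); a minimal sketch of the generic formula (not tied to this paper's instrumentation):

```python
import math

def coloration_efficiency(t_bleached, t_colored, charge_density):
    """Coloration efficiency eta = delta(OD) / Q, where
    delta(OD) = log10(T_bleached / T_colored) and Q is the injected
    charge density in C/cm^2; result is in cm^2/C."""
    delta_od = math.log10(t_bleached / t_colored)
    return delta_od / charge_density
```

For example, a transmittance drop from 0.9 to 0.09 (ΔOD = 1) at 1 mC/cm² gives 1000 cm²/C.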
(This article belongs to the Special Issue Advances in Dyes and Photochromics)

16 pages, 4066 KB  
Article
A Novel ResUNet Architecture for Thin Cloud and Boundary Detection in Landsat 8 Remote Sensing Imagery
by Hao Huang, Xiaofang Liu, Chi Yang and Aimin Liu
Appl. Sci. 2026, 16(4), 2122; https://doi.org/10.3390/app16042122 - 22 Feb 2026
Viewed by 328
Abstract
To address the challenges of thin cloud detection and imprecise cloud boundary segmentation in Landsat 8 remote sensing imagery, this paper proposes a systematic approach that comprehensively enhances cloud detection accuracy from data preprocessing to network architecture optimisation. First, through empirical analysis, an optimised band input combination was determined (removing the panchromatic Band 8 and thermal infrared Band 11), effectively suppressing urban background noise. Subsequently, an enhanced ResUNet model was designed, innovatively integrating an Atrous Spatial Pyramid Pooling (ASPP) module with an attention gate (AG) mechanism. The ASPP module enhances detection capabilities for thin clouds and diffuse cloud masses by aggregating multi-scale global contextual information. The attention-gated mechanism finely tunes feature fusion during the decoding phase, suppressing interference from highly reflective surface features to achieve precise cloud boundary segmentation. Experiments conducted on the Landsat 8 dataset featuring typical urban scenes demonstrate that the proposed method significantly outperforms mainstream models across both conventional and boundary-specific metrics, achieving an overall accuracy (OA) of 0.9717, a mean intersection over union (mIoU) of 0.8102, and, notably, a mean bounding box intersection over union (mB-IoU) of 0.4154 and a mean bounding box F1 score of 0.5356, representing improvements of 16.3% and 12.5%, respectively, over existing methods. This research provides an efficient and robust technical framework for cloud detection tasks in complex urban environments, laying the foundation for high-precision processing of remote sensing imagery and subsequent quantitative analysis.
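The reported OA and mIoU follow their standard confusion-matrix definitions; a generic NumPy sketch (not the authors' evaluation code):

```python
import numpy as np

def cloud_metrics(pred, truth, num_classes=2):
    """Overall accuracy (OA) and mean IoU from flat integer label arrays."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)  # rows: truth, cols: prediction
    oa = np.trace(cm) / cm.sum()
    inter = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - np.diag(cm)
    miou = np.mean(inter / np.maximum(union, 1))
    return oa, miou
```

The boundary-specific mB-IoU metric restricts the same computation to pixels near predicted and reference cloud boundaries.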

21 pages, 5131 KB  
Article
Design and Characterization of a Hyperspectral Colposcope Based on Dual-LCTF VNIR Narrow-Band Illumination
by Carlos Vega, Raquel Leon, Norberto Medina, Himar Fabelo, Alicia Martín and Gustavo M. Callico
Sensors 2026, 26(4), 1255; https://doi.org/10.3390/s26041255 - 14 Feb 2026
Viewed by 363
Abstract
Early detection of precancerous cervical lesions is critical for improving patient management and clinical outcomes. Hyperspectral imaging has emerged as a promising non-invasive, label-free imaging modality for rapid medical diagnosis. This work presents the development of a liquid-crystal-tunable-filter-based hyperspectral colposcopy system covering the visible and near-infrared spectral ranges. The proposed system integrates two tunable filters into an existing Optomic OP-C5 clinical colposcope, enabling hyperspectral acquisition from 460 to 1000 nm with 130 spectral bands at 5 nm resolution using a panchromatic camera. Two alternative acquisition strategies were investigated: (i) filtering the light received by the system, or (ii) filtering the light emitted toward the sample. In addition, wavelength-dependent exposure control was studied to compensate for reduced system sensitivity and improve the signal-to-noise ratio in low-efficiency spectral regions. The system was benchmarked against a previous custom hyperspectral implementation based on a commercial camera. The comparative analysis highlights the advantages and limitations of both approaches, demonstrating the proposed system’s suitability for integration into clinical workflows and its potential for early detection of precancerous cervical lesions during routine colposcopic examinations.
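Wavelength-dependent exposure control of the kind described can be illustrated by scaling exposure time inversely with relative system efficiency, capped at a maximum; this is a simplified sketch with hypothetical parameters, not the actual acquisition logic:

```python
def exposure_schedule(efficiency, base_ms=10.0, max_ms=200.0):
    """Per-wavelength exposure times (ms) that compensate for low
    relative system efficiency: t(lambda) = base / efficiency(lambda),
    capped at max_ms. Parameters are illustrative, not the paper's."""
    return [min(base_ms / max(e, 1e-6), max_ms) for e in efficiency]
```

The cap keeps total scan time bounded in spectral regions where the filter and sensor efficiencies are lowest.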
(This article belongs to the Special Issue Advanced Sensing Techniques in Biomedical Signal Processing)

21 pages, 6229 KB  
Article
A Spatial–Spectral Decoupled Transformer Framework for Super-Resolution of Low-Earth-Orbit Multispectral Satellite Imagery
by Duhui Yun and Seok-Teak Yun
Appl. Sci. 2026, 16(4), 1674; https://doi.org/10.3390/app16041674 - 7 Feb 2026
Viewed by 357
Abstract
Multispectral (MS) satellite imagery provides rich spectral information for surface and atmospheric interpretation, yet its spatial resolution is often limited by sensor design. In this study, we propose a Transformer-based MS super-resolution framework that uses high-resolution panchromatic (PAN) imagery to supply complementary spatial detail cues for MS reconstruction and explicitly separates spatial enhancement from spectral preservation. In the spatial branch, PAN features are aligned to the MS grid via Pixel-Unshuffle and encoded with shifted-window self-attention to capture long-range spatial dependencies efficiently. In the spectral branch, spectral self-attention treats bands as tokens to learn inter-band correlations and maintain spectral consistency. The two representations are fused through channel concatenation and a 1 × 1 convolutional module, followed by a reconstruction head that upsamples the fused features to generate high-resolution MS outputs. For training, low-resolution MS inputs are synthesized from KOMPSAT-3A MS imagery using a degradation pipeline that combines modulation transfer function-based blur, downsampling, and additive Gaussian noise; the operation order is randomly permuted to emulate diverse acquisition conditions. In addition, Bayesian optimization is employed to explore network configurations through jointly considering the normalized mean absolute error and inference time. Experiments demonstrate that the proposed approach attains 46.23 dB PSNR, 0.9735 SSIM, and 3.12 ERGAS with approximately 167.4 K parameters, achieving a high restoration quality and computational efficiency across diverse degradation settings.
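The described degradation pipeline (blur, downsampling, additive Gaussian noise, with randomly permuted operation order) can be sketched as follows; the box-filter blur stands in for the paper's MTF-based blur, and all parameters are illustrative:

```python
import numpy as np

def _blur(img, k=3):
    """Crude low-pass stand-in for MTF-based blur: separable box filter."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

def degrade(band, rng, scale=2, sigma=0.01):
    """Synthesize a low-resolution input from one 2-D MS band: blur,
    downsample, and additive Gaussian noise, applied in a randomly
    permuted order to emulate diverse acquisition conditions."""
    ops = {
        "blur": _blur,
        "down": lambda x: x[::scale, ::scale],
        "noise": lambda x: x + rng.normal(0.0, sigma, x.shape),
    }
    order = rng.permutation(list(ops))
    out = band.astype(float)
    for name in order:
        out = ops[name](out)
    return out, list(order)
```

Randomizing the order matters because blur-then-downsample and downsample-then-blur produce differently aliased inputs, broadening the degradations the network sees.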

26 pages, 12587 KB  
Article
Shift-Invariant Unsupervised Pansharpening Based on Diffusion Model
by Jialei Xie, Luyan Ji, Jinzhou Ye, Jilei Liu, Qi Feng, Kejian Liu and Yongchao Zhao
Remote Sens. 2026, 18(1), 27; https://doi.org/10.3390/rs18010027 - 22 Dec 2025
Viewed by 500
Abstract
Pansharpening is a crucial topic in remote sensing, and numerous deep learning-based methods have recently been proposed to explore the potential of deep neural networks (DNNs). However, existing approaches are often sensitive to spatial translation errors between high-resolution panchromatic (HRPan) and low-resolution multispectral (LRMS) images, leading to noticeable artifacts in the fused results. To address this issue, we propose an unsupervised pansharpening method that is robust to translation misalignment between HRPan and LRMS inputs. The proposed framework integrates a shift-invariant module to estimate subpixel spatial offsets and a diffusion-based generative model to progressively enhance spatial and spectral details. Moreover, a multi-scale detail injection module is designed to guide the diffusion process with fine-grained structural information. In addition, a carefully formulated loss function is established to preserve the fidelity of fusion results and facilitate the estimation of translation errors. Experiments conducted on the GaoFen-2, GaoFen-1, and WorldView-2 datasets demonstrate that the proposed method achieves superior fusion quality compared with state-of-the-art approaches and effectively suppresses artifacts caused by translation errors.
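The paper's shift-invariant module is learned, but the classical building block for translation estimation, phase correlation, conveys the idea; a generic integer-pixel NumPy sketch (not the paper's module):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer-pixel translation of `moved` relative to
    `ref` via phase correlation: the normalized cross-power spectrum
    transforms back to a delta at the offset. Illustrative only."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moved)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:  # unwrap circular shifts to signed offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Subpixel variants refine the peak location by interpolating or upsampling the correlation surface around the integer maximum.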
(This article belongs to the Section Remote Sensing Image Processing)

23 pages, 4335 KB  
Article
Fourier Fusion Implicit Mamba Network for Remote Sensing Pansharpening
by Ze-Zheng He, Hong-Xia Dou and Yu-Jie Liang
Remote Sens. 2025, 17(22), 3747; https://doi.org/10.3390/rs17223747 - 18 Nov 2025
Viewed by 1102
Abstract
Pansharpening seeks to reconstruct a high-resolution multi-spectral image (HR-MSI) by integrating the fine spatial details from the panchromatic (PAN) image with the spectral richness of the low-resolution multi-spectral image (LR-MSI). In recent years, Implicit Neural Representations (INRs) have demonstrated remarkable potential in various visual domains, offering a novel paradigm for pansharpening tasks. However, traditional INRs often suffer from insufficient global awareness and a tendency to capture mainly low-frequency information. To address these challenges, we present the Fourier Fusion Implicit Mamba Network (FFIMamba). The network takes advantage of Mamba’s ability to capture long-range dependencies and integrates a Fourier-based spatial–frequency fusion approach. By mapping features into the Fourier domain, FFIMamba identifies and emphasizes high-frequency details across spatial and frequency dimensions. This process broadens the network’s perception area, enabling more accurate reconstruction of fine structures and textures. Moreover, a spatial–frequency interactive fusion module is introduced to strengthen the information exchange among INR features. Extensive experiments on multiple benchmark datasets demonstrate that FFIMamba achieves superior performance in both visual quality and quantitative metrics. Ablation studies further verify the effectiveness of each component within the proposed framework.
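The idea of emphasizing high-frequency detail in the Fourier domain can be illustrated with a simple radial mask that boosts components above a cutoff; this is a generic sketch, not FFIMamba's fusion module:

```python
import numpy as np

def emphasize_high_freq(img, cutoff=0.1, gain=2.0):
    """Boost spectral components above a normalized radial cutoff in
    the Fourier domain, then transform back (real part). Parameters
    are illustrative."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))  # 0 at DC, ~1 at Nyquist
    mask = np.where(radius > cutoff, gain, 1.0)
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

Because the DC and low-frequency terms are left untouched, overall brightness is preserved while edges and textures are amplified.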

16 pages, 10714 KB  
Article
Ultra-High-Resolution Optical Remote Sensing Satellite Identification of Pine-Wood-Nematode-Infected Trees
by Ziqi Nie, Lin Qin, Peng Xing, Xuelian Meng, Xianjin Meng, Kaitong Qin and Changwei Wang
Plants 2025, 14(22), 3436; https://doi.org/10.3390/plants14223436 - 10 Nov 2025
Cited by 1 | Viewed by 899
Abstract
The pine wood nematode (PWN), one of the globally significant forest diseases, has driven the demand for precise detection methods. Recent advances in satellite remote sensing technology, particularly ultra-high-resolution optical imagery, have opened new avenues for identifying PWN-infected trees. In order to systematically evaluate the ability of ultra-high-resolution optical remote sensing and the influence of spatial and spectral resolution in detecting PWN-infected trees, this study utilized a U-Net network model to identify PWN-infected trees using three remote sensing datasets: ultra-high-resolution multispectral imagery from the Beijing 3 International Cooperative Remote Sensing Satellite (BJ3N), with a panchromatic band spatial resolution of 0.3 m and six multispectral bands at 1.2 m; high-resolution multispectral imagery from the Beijing 3A satellite (BJ3A), with a panchromatic band resolution of 0.5 m and four multispectral bands at 2 m; and unmanned aerial vehicle (UAV) imagery with five multispectral bands at 0.07 m. Comparison of the identification results demonstrated that (1) UAV multispectral imagery at 0.07 m spatial resolution achieved the highest accuracy, with an F1 score of 89.1%, followed by the fused ultra-high-resolution 0.3 m BJ3N satellite imagery, with an F1 score of 88.9%; in contrast, BJ3A imagery with a raw spatial resolution of 2 m performed poorly, with an F1 score of only 28%. These results underscore that finer spatial resolution in remote sensing imagery directly enhances the ability to detect subtle canopy changes indicative of PWN infestation. (2) For UAV, BJ3N, and BJ3A imagery, the identification accuracy for PWN-infected trees showed no significant differences across various band combinations at equivalent spatial resolutions, indicating that spectral resolution plays a secondary role to spatial resolution in detecting PWN-infected trees using ultra-high-resolution optical imagery. (3) The 0.3 m BJ3N satellite imagery exhibits low false-detection and omission rates, with F1 scores comparable to higher-resolution UAV imagery, indicating that a spatial resolution of 0.3 m is sufficient for identifying PWN-infected trees and approaches a point of saturation in a subtropical mountain monsoon climate zone. In conclusion, ultra-high-resolution satellite remote sensing, characterized by frequent data revisit cycles, broad spatial coverage, and balanced spatial–spectral performance, provides an optimal remote sensing data source for identifying PWN-infected trees. As such, it is poised to become a cornerstone of future research and practical applications in detecting and managing PWN infestations globally.
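Since the comparison rests on F1 scores together with false-detection and omission rates, their relationship can be sketched, taking false-detection rate as 1 − precision and omission rate as 1 − recall (a common reading, stated here as an assumption):

```python
def f1_from_error_rates(false_detection_rate, omission_rate):
    """F1 score from a false-detection rate (assumed 1 - precision)
    and an omission rate (assumed 1 - recall)."""
    precision = 1.0 - false_detection_rate
    recall = 1.0 - omission_rate
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

As the harmonic mean of precision and recall, F1 is dragged down by whichever error rate is worse, which is why both must stay low for scores near 89%.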

29 pages, 21103 KB  
Article
Dehazing of Panchromatic Remote Sensing Images Based on Histogram Features
by Hao Wang, Yalin Ding, Xiaoqin Zhou, Guoqin Yuan and Chao Sun
Remote Sens. 2025, 17(20), 3479; https://doi.org/10.3390/rs17203479 - 18 Oct 2025
Cited by 1 | Viewed by 829
Abstract
During long-range imaging, the turbid medium in the atmosphere absorbs and scatters light, resulting in reduced contrast, a narrowed dynamic range, and obscured detail in remote sensing images. Prior-based methods have the advantages of good real-time performance and a wide application range; however, few of the existing prior-based methods are applicable to the dehazing of panchromatic images. In this paper, we propose a prior-based dehazing method for panchromatic remote sensing images built on statistical histogram features. First, the hazy image is divided into plain image patches and mixed image patches according to their histogram features. Next, the average occurrence differences between adjacent gray levels (AODAGs) of plain image patches and the average distance to the gray-level gravity center (ADGG) of mixed image patches are calculated, respectively, and the transmission map is obtained from the statistical relation equation. The atmospheric light of each image patch is then calculated separately from the patch's maximum gray level using threshold segmentation. Finally, the dehazed image is obtained from the physical model. Extensive experiments on synthetic and real-world panchromatic hazy remote sensing images show that the proposed algorithm outperforms state-of-the-art dehazing methods in both efficiency and dehazing effect.
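A literal reading of the two histogram features, AODAG (mean absolute occurrence difference between adjacent gray levels) and ADGG (mean distance to the gray-level gravity center), might look like the sketch below; the exact definitions in the paper may differ:

```python
import numpy as np

def aodag(patch, levels=256):
    """Average absolute occurrence difference between adjacent gray
    levels of the patch histogram (one literal reading of AODAG)."""
    h, _ = np.histogram(patch, bins=levels, range=(0, levels))
    return np.abs(np.diff(h)).mean()

def adgg(patch):
    """Average distance of pixel gray levels to the histogram's
    gravity center, i.e. the mean absolute deviation from the mean
    gray level (one literal reading of ADGG)."""
    g = patch.astype(float).ravel()
    return np.abs(g - g.mean()).mean()
```

Intuitively, haze flattens patch histograms toward a narrow peak, so both features shrink with haze density, which is what lets a statistical relation map them to transmission.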

33 pages, 20327 KB  
Article
Automated Detection of Beaver-Influenced Floodplain Inundations in Multi-Temporal Aerial Imagery Using Deep Learning Algorithms
by Evan Zocco, Chandi Witharana, Isaac M. Ortega and William Ouimet
ISPRS Int. J. Geo-Inf. 2025, 14(10), 383; https://doi.org/10.3390/ijgi14100383 - 30 Sep 2025
Viewed by 955
Abstract
Remote sensing provides a viable alternative for understanding landscape modifications attributed to beaver activity. The central objective of this study is to integrate multi-source remote sensing observations in tandem with a deep learning (DL) (convolutional neural net or transformer) model to automatically map beaver-influenced floodplain inundations (BIFI) over large geographical extents. We trained, validated, and tested eleven different model configurations in three architectures using five ResNet and five B-Finetuned encoders. The training dataset consisted of >25,000 manually annotated aerial image tiles of BIFIs in Connecticut. The YOLOv8 architecture outperformed competing configurations and achieved an F1 score of 80.59% and pixel-based map accuracy of 98.95%. SegFormer and U-Net++’s highest-performing models had F1 scores of 68.98% and 78.86%, respectively. The YOLOv8l-seg model was deployed at a statewide scale based on 1 m resolution multi-temporal aerial imagery acquired from 1990 to 2019 under leaf-on and leaf-off conditions. Our results suggest a variety of inferences when comparing leaf-on and leaf-off conditions of the same year. The model exhibits limitations in identifying BIFIs in panchromatic imagery in occluded environments. Study findings demonstrate the potential of harnessing historical and modern aerial image datasets with state-of-the-art DL models to increase our understanding of beaver activity across space and time.

27 pages, 7020 KB  
Article
RPC Correction Coefficient Extrapolation for KOMPSAT-3A Imagery in Inaccessible Regions
by Namhoon Kim
Remote Sens. 2025, 17(19), 3332; https://doi.org/10.3390/rs17193332 - 29 Sep 2025
Viewed by 1102
Abstract
High-resolution pushbroom satellites routinely acquire strips spanning many tens of kilometers whose vendor-supplied rational polynomial coefficients (RPCs) exhibit systematic, direction-dependent biases that accumulate downstream when ground control is sparse. This study presents a physically interpretable stripwise extrapolation framework that predicts along- and across-track RPC correction coefficients for inaccessible segments from an upstream calibration subset. Terrain-independent RPCs were regenerated, and residual image-space errors were modeled with weighted least squares using elapsed time, off-nadir evolution, and morphometric descriptors of the target terrain. Gaussian kernel weights favor calibration scenes with a Jarque–Bera-indexed relief similar to the target. When applied to three KOMPSAT-3A panchromatic strips, the approach preserves native scene geometry while transporting calibrated coefficients downstream, reducing positional errors in two strips to <2.8 pixels (~2.0 m at 0.710 m Ground Sample Distance, GSD). The first strip, with stronger attitude drift, retains 4.589-pixel along-track errors, indicating the need for wider predictor coverage under aggressive maneuvers. The results clarify the directional error structure, with a near-constant across-track bias and low-frequency along-track drift, and show that a compact predictor set can stabilize extrapolation without full-block adjustment or dense tie networks. This provides a GCP-efficient alternative to full-block adjustment and enables accurate georeferencing in GCP-sparse environments.
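The weighted-least-squares step with Gaussian kernel weights over a relief-similarity distance can be sketched generically; this is illustrative only, with the predictor construction and the Jarque–Bera-based similarity index omitted:

```python
import numpy as np

def kernel_wls(X, y, dist, bandwidth=1.0):
    """Weighted least squares: rows of the design matrix X are
    calibration scenes, y their residual errors, and `dist` a
    per-scene relief-similarity distance to the target (smaller =
    more similar = higher Gaussian weight). Illustrative sketch."""
    w = np.exp(-0.5 * (np.asarray(dist) / bandwidth) ** 2)
    sw = np.sqrt(w)  # solve the weighted system via row scaling
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

The fitted coefficients are then evaluated at the downstream (inaccessible) scene's predictor values to extrapolate its RPC correction.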

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Viewed by 1309
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive crossmodal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion.
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

22 pages, 6968 KB  
Article
Signatures of Breaking Waves in a Coastal Polynya Covered with Frazil Ice: A High-Resolution Satellite Image Case Study of Terra Nova Bay Polynya
by Katarzyna Bradtke, Wojciech Brodziński and Agnieszka Herman
Remote Sens. 2025, 17(18), 3198; https://doi.org/10.3390/rs17183198 - 16 Sep 2025
Cited by 2 | Viewed by 1399
Abstract
The study focuses on the detection of breaking wave crests in the highly dynamic waters of an Antarctic coastal polynya using high-resolution panchromatic satellite imagery. Accurate assessment of whitecap coverage is crucial for improving our understanding of the interactions between wave generation, air–sea heat exchange, and sea ice formation in these complex environments. As open-ocean whitecap detection methods are inadequate in coastal polynyas partially covered with frazil ice, we discuss an approach that exploits specific lighting conditions: the alignment of sunlight with the dominant wind direction and low solar elevation. Under such conditions, steep breaking waves cast pronounced shadows, which are used as the primary indicator of wave crests, particularly in frazil streak zones. The algorithm is optimized to exploit these conditions and minimize false positives along frazil streak boundaries. We applied the algorithm to a WorldView-2 image covering different parts of Terra Nova Bay Polynya (Ross Sea), a dynamic polar coastal zone. This case study demonstrates that the spatial distribution of detected breaking waves is consistent with ice conditions and wind forcing patterns, while also revealing deviations that point to complex wind–wave–ice interactions. Although quantitative validation of satellite-derived whitecap coverage was not possible due to the lack of in situ data, the method performs reliably under a range of conditions. Limitations of the proposed approach are pointed out and discussed. Finally, the study highlights the risk of misinterpretation of lower-resolution reflectance data in areas where whitecaps and sea ice coexist at subpixel scales.

22 pages, 3882 KB  
Article
Combining Satellite Image Standardization and Self-Supervised Learning to Improve Building Segmentation Accuracy
by Haoran Zhang and Bunkei Matsushita
Remote Sens. 2025, 17(18), 3182; https://doi.org/10.3390/rs17183182 - 14 Sep 2025
Cited by 1 | Viewed by 1281
Abstract
Many research fields, such as urban planning, urban climate and environmental assessment, require information on the distribution of buildings. In this study, we used U-Net to segment buildings from WorldView-3 imagery. To improve the accuracy of building segmentation, we undertook two endeavors. First, we investigated the optimal order of atmospheric correction (AC) and panchromatic sharpening (pan-sharpening) and found that performing AC before pan-sharpening results in higher building segmentation accuracy than performing it afterwards, increasing the average IoU by 9.4%. Second, we developed a new multi-task self-supervised learning (SSL) network to pre-train a VGG19 backbone using 21 unlabeled WorldView images. The new multi-task SSL network includes two pretext tasks specifically designed to take into account the characteristics of buildings in satellite imagery (size, distribution pattern, multispectral signature, etc.). Performance evaluation shows that U-Net combined with an SSL pre-trained VGG19 backbone improves building segmentation accuracy by 15.3% compared to U-Net combined with a VGG19 backbone trained from scratch. Comparative analysis also shows that the new multi-task SSL network outperforms other existing SSL methods, improving building segmentation accuracy by 3.5–13.7%. Moreover, the proposed method significantly saves computational costs and can effectively work on a personal computer.
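The ordering question (AC first, then pan-sharpening) can be sketched as a small pipeline. The paper does not specify its AC method in this abstract, so a simple dark-object subtraction stands in for AC here, and Brovey component substitution stands in for the sharpening step; both choices and all names are assumptions for illustration.

```python
import numpy as np

def dark_object_subtraction(band):
    """Hypothetical stand-in for atmospheric correction:
    subtract the scene's darkest value (simple DOS)."""
    return np.clip(band - band.min(), 0, None)

def brovey_pansharpen(ms, pan):
    """Simple Brovey component substitution: scale each (already
    resampled) MS band by PAN / mean-of-MS intensity."""
    intensity = ms.mean(axis=0)
    return ms * (pan / np.maximum(intensity, 1e-6))

rng = np.random.default_rng(1)
ms = rng.uniform(50, 200, (4, 32, 32))   # 4 MS bands, resampled to PAN grid
pan = rng.uniform(50, 200, (32, 32))

# The ordering the paper found to work better: correct first, then sharpen.
ms_ac = np.stack([dark_object_subtraction(b) for b in ms])
fused = brovey_pansharpen(ms_ac, dark_object_subtraction(pan))
```

A useful sanity check on Brovey output is that the band-mean of the fused product reproduces the (corrected) PAN band, which is exactly the spatial detail being injected.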

23 pages, 6105 KB  
Article
YUV Color Model-Based Adaptive Pansharpening with Lanczos Interpolation and Spectral Weights
by Shavkat Fazilov, Ozod Yusupov, Erali Eshonqulov, Khabiba Abdieva and Ziyodullo Malikov
Mathematics 2025, 13(17), 2868; https://doi.org/10.3390/math13172868 - 5 Sep 2025
Cited by 1 | Viewed by 1020
Abstract
Pansharpening is a method of image fusion that combines a panchromatic (PAN) image with high spatial resolution and multispectral (MS) images, which possess different spectral characteristics and are frequently obtained from satellite sensors. Despite the development of numerous pansharpening methods in recent years, a key challenge continues to be preserving both spatial detail and spectral accuracy in the combined image. To tackle this challenge, we introduce a new approach that enhances the component substitution-based Adaptive IHS method by integrating the YUV color model along with weighting coefficients influenced by the multispectral data. In our proposed approach, the conventional IHS color model is substituted with the YUV model to enhance spectral consistency. Additionally, Lanczos interpolation is used to upscale the MS image to match the spatial resolution of the PAN image. Each channel of the MS image is fused using adaptive weights derived from the influence of multispectral data, leading to the final pansharpened image. Based on the findings from experiments conducted on the PairMax and PanCollection datasets, our proposed method exhibited superior spectral and spatial performance when compared to several existing pansharpening techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
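The YUV substitution step can be sketched in a few lines: convert the upsampled MS image to YUV, replace the luma channel with a statistics-matched PAN band, and convert back, leaving chroma untouched. This sketch uses fixed BT.601 coefficients, mean/std matching, and nearest-neighbor upsampling as stand-ins; the paper's method uses adaptive spectral weights and Lanczos interpolation, which are not reproduced here.

```python
import numpy as np

# BT.601 RGB <-> YUV matrices (fixed coefficients; the paper's
# weights are adaptive, so this is a simplification).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def yuv_substitute(rgb, pan):
    """Component substitution in YUV space: replace luma Y with a
    mean/std-matched PAN band while keeping chroma U, V intact."""
    h, w, _ = rgb.shape
    yuv = rgb.reshape(-1, 3) @ RGB2YUV.T
    y, p = yuv[:, 0], pan.reshape(-1)
    yuv[:, 0] = (p - p.mean()) / (p.std() + 1e-9) * y.std() + y.mean()
    return (yuv @ YUV2RGB.T).reshape(h, w, 3)

rng = np.random.default_rng(2)
ms_rgb = rng.uniform(0, 1, (8, 8, 3))
ms_up = ms_rgb.repeat(4, axis=0).repeat(4, axis=1)  # nearest-neighbor stand-in
pan = rng.uniform(0, 1, (32, 32))                   # for Lanczos interpolation
fused = yuv_substitute(ms_up, pan)
```

Because only Y is replaced, the U and V planes of the fused result are identical to those of the upsampled MS input, which is the spectral-consistency property motivating the YUV choice.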
