Search Results (22)

Search Parameters:
Keywords = hyperspectral spatial frequency domain imaging

21 pages, 1505 KB  
Article
WaveletHSI: Direct HSI Classification from Compressed Wavelet Coefficients via Sub-Band Feature Extraction and Fusion
by Xin Li and Baile Sun
J. Imaging 2025, 11(12), 441; https://doi.org/10.3390/jimaging11120441 - 10 Dec 2025
Viewed by 295
Abstract
A major computational bottleneck in classifying large-scale hyperspectral images (HSI) is the mandatory data decompression prior to processing. Compressed-domain computing offers a solution by enabling deep learning on partially compressed data. However, existing compressed-domain methods are predominantly tailored for the Discrete Cosine Transform (DCT) used in natural images, while HSIs are typically compressed using the Discrete Wavelet Transform (DWT). The fundamental structural mismatch between the block-based DCT and the hierarchical DWT sub-bands presents two core challenges: how to extract features from multiple wavelet sub-bands, and how to fuse these features effectively. To address these issues, we propose a novel framework that extracts and fuses features from different DWT sub-bands directly. We design a multi-branch feature extractor with sub-band feature alignment loss that processes functionally different sub-bands in parallel, preserving the independence of each frequency feature. We then employ a sub-band cross-attention mechanism that inverts the typical attention paradigm by using the sparse, high-frequency detail sub-bands as queries to adaptively select and enhance salient features from the dense, information-rich low-frequency sub-bands. This enables a targeted fusion of global context and fine-grained structural information without data reconstruction. Experiments on three benchmark datasets demonstrate that our method achieves classification accuracy comparable to state-of-the-art spatial-domain approaches while eliminating at least 56% of the decompression overhead. Full article
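The sub-band structure this abstract relies on is easy to illustrate: a single-level 2D Haar DWT splits an image into one dense low-frequency approximation (LL) and three sparse high-frequency detail sub-bands (LH, HL, HH) — the dense/sparse asymmetry the cross-attention described above exploits. A minimal NumPy sketch (the function name and toy input are illustrative, not from the paper):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT of a 2D array with even sides.

    Returns the four sub-bands (LL, LH, HL, HH): one dense low-frequency
    approximation plus three sparse high-frequency detail bands."""
    # Pairwise averages/differences along columns, then rows.
    lo_r = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi_r = (x[:, 0::2] - x[:, 1::2]) / 2.0
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh

band = np.arange(16.0).reshape(4, 4)   # one toy spectral band
ll, lh, hl, hh = haar_dwt2(band)
print(ll.shape)  # (2, 2) — each sub-band is quarter-size
```

On a constant image the three detail bands are exactly zero, which is why they stay sparse (and cheap to process) on smooth natural scenes.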
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)

20 pages, 6167 KB  
Article
Spatial/Spectral-Frequency Adaptive Network for Hyperspectral Image Reconstruction in CASSI
by Hejian Liu, Yan Yuan, Xiaorui Yin and Lijuan Su
Remote Sens. 2025, 17(19), 3382; https://doi.org/10.3390/rs17193382 - 8 Oct 2025
Viewed by 942
Abstract
Coded-Aperture Snapshot Spectral Imaging (CASSI) systems acquire 3D spatial–spectral information on dynamic targets by converting 3D hyperspectral images (HSIs) into 2D compressed measurements. Various end-to-end networks have been proposed for HSI reconstruction from these measurements. However, these methods have not explored the frequency-domain information of HSIs. This research presents the spatial/spectral-frequency adaptive network (SSFAN) for CASSI image reconstruction. A frequency-division transformation (FDT) decomposes HSIs into distinct Fourier frequency components, enabling multiscale feature extraction in the frequency domain. The proposed dual-branch architecture consists of a spatial–spectral module (SSM) to preserve spatial–spectral consistency and a frequency division module (FDM) to model inter-frequency dependencies. Channel compression/expansion modules are integrated into the FDM to balance computational efficiency and reconstruction quality. Frequency-division loss supervises feature learning across divided frequency channels. Ablation experiments validate the contributions of each network module. Furthermore, comparison experiments on synthetic and real CASSI datasets demonstrate that SSFAN outperforms state-of-the-art end-to-end methods in reconstruction performance. Full article

23 pages, 3623 KB  
Article
WSC-Net: A Wavelet-Enhanced Swin Transformer with Cross-Domain Attention for Hyperspectral Image Classification
by Zhen Yang, Huihui Li, Feiming Wei, Jin Ma and Tao Zhang
Remote Sens. 2025, 17(18), 3216; https://doi.org/10.3390/rs17183216 - 17 Sep 2025
Cited by 1 | Viewed by 942
Abstract
This paper introduces the Wavelet-Enhanced Swin Transformer Network (WSC-Net), a novel dual-branch architecture that resolves the inherent trade-off between global spatial context and fine-grained spectral detail in hyperspectral image (HSI) classification. While transformer-based models excel at capturing long-range dependencies, their patch-based nature often overlooks intra-patch high-frequency details, hindering the discrimination of spectrally similar classes. Our framework synergistically couples a two-stage Swin transformer with a parallel Wavelet Transform Module (WTM) for local frequency information capture. To address the semantic gap between spatial and frequency domains, we propose the Cross-Domain Attention Fusion (CDAF) module—a bi-directional attention mechanism that facilitates intelligent feature exchange between the two streams. CDAF explicitly models cross-domain dependencies, amplifies complementary features, and suppresses noise through attention-guided integration. Extensive experiments on four benchmark datasets demonstrate that WSC-Net consistently outperforms state-of-the-art methods, confirming its effectiveness in balancing global contextual modeling with local detail preservation. Full article

16 pages, 1046 KB  
Review
How Can Technology Improve Burn Wound Care: A Review of Wound Imaging Technologies and Their Application in Burns—UK Experience
by Nawras Farhan, Zakariya Hassan, Mohammad Al Mahdi Ali, Zaid Alqalaf, Roeya E. Rasul and Steven Jeffery
Diagnostics 2025, 15(17), 2277; https://doi.org/10.3390/diagnostics15172277 - 8 Sep 2025
Viewed by 1697
Abstract
Burn wounds are complex injuries that require timely and accurate assessment to guide treatment decisions and improve healing outcomes. Traditional clinical evaluations are largely subjective, often leading to delays in intervention and increased risk of complications. Imaging technologies have emerged as valuable tools that enhance diagnostic accuracy and enable objective, real-time assessment of wound characteristics. This review aims to evaluate the range of imaging modalities currently applied in burn wound care and assess their clinical relevance, diagnostic accuracy, and cost-effectiveness. It explores how these technologies address key challenges in wound evaluation, particularly related to burn depth, perfusion status, bacterial burden, and healing potential. A comprehensive narrative review was conducted, drawing on peer-reviewed journal articles, NICE innovation briefings, and clinical trial data. The databases searched included PubMed, Ovid MEDLINE, and the Cochrane Library. Imaging modalities examined include Laser Doppler Imaging (LDI), Fluorescence Imaging (FI), Near-Infrared Spectroscopy (NIR), Hyperspectral Imaging, Spatial Frequency Domain Imaging (SFDI), and digital wound measurement systems. The clinical application and integration of these modalities in UK clinical practice were also explored. Each modality demonstrated unique clinical benefits. LDI was effective in assessing burn depth and perfusion, improving surgical planning, and reducing unnecessary procedures. FI, particularly the MolecuLight i:X device (MolecuLight Inc., Toronto, ON, Canada), accurately identified bacterial burden and guided targeted interventions. NIR and Hyperspectral Imaging provided insights into tissue oxygenation and viability, while SFDI enabled early detection of infection and vascular compromise. Digital measurement tools offered accurate, non-contact assessment and supported telemedicine use. 
NICE recognized both LDI and MolecuLight as valuable tools with the potential to improve outcomes and reduce healthcare costs. Imaging technologies significantly improve the precision and efficiency of burn wound care. Their ability to offer objective, non-invasive diagnostics enhances clinical decision-making. Future research should focus on broader validation and integration into clinical guidelines to ensure widespread adoption. Full article
(This article belongs to the Special Issue Diagnostics in the Emergency and Critical Care Medicine)

20 pages, 5077 KB  
Article
Hybrid-Domain Synergistic Transformer for Hyperspectral Image Denoising
by Haoyue Li and Di Wu
Appl. Sci. 2025, 15(17), 9735; https://doi.org/10.3390/app15179735 - 4 Sep 2025
Viewed by 1091
Abstract
Hyperspectral image (HSI) denoising is challenged by complex spatial-spectral noise coupling. Existing deep learning methods, primarily designed for RGB images, fail to address HSI-specific noise distributions and spectral correlations. This paper proposes a Hybrid-Domain Synergistic Transformer (HDST) integrating frequency-domain enhancement and multiscale modeling. Key contributions include (1) a Fourier-based preprocessing module decoupling spectral noise; (2) a dynamic cross-domain attention mechanism adaptively fusing spatial-frequency features; and (3) a hierarchical architecture combining global noise modeling and detail recovery. Experiments on realistic and synthetic datasets show HDST outperforms state-of-the-art methods in PSNR, with fewer parameters. Visual results confirm effective noise suppression without spectral distortion. The framework provides a robust solution for HSI denoising, demonstrating potential for high-dimensional visual data processing. Full article

28 pages, 5450 KB  
Article
DFAST: A Differential-Frequency Attention-Based Band Selection Transformer for Hyperspectral Image Classification
by Deren Fu, Yiliang Zeng and Jiahong Zhao
Remote Sens. 2025, 17(14), 2488; https://doi.org/10.3390/rs17142488 - 17 Jul 2025
Cited by 2 | Viewed by 876
Abstract
Hyperspectral image (HSI) classification faces challenges such as high dimensionality, spectral redundancy, and difficulty in modeling the coupling between spectral and spatial features. Existing methods fail to fully exploit first-order derivatives and frequency domain information, which limits classification performance. To address these issues, this paper proposes a Differential-Frequency Attention-based Band Selection Transformer (DFAST) for HSI classification. Specifically, a Differential-Frequency Attention-based Band Selection Embedding Module (DFASEmbeddings) is designed to extract original spectral, first-order derivative, and frequency domain features via a multi-branch structure. Learnable band selection attention weights are introduced to adaptively select important bands, capture critical spectral information, and significantly reduce redundancy. A 3D convolution and a spectral–spatial attention mechanism are applied to perform fine-grained modeling of spectral and spatial features, further enhancing the capture of global spectral–spatial dependencies. The embedded features are then input into a cascaded Transformer encoder (SCEncoder) for deep modeling of spectral–spatial coupling characteristics to achieve classification. Additionally, the learnable band selection attention weights are output for dimensionality reduction. Experiments on several public hyperspectral datasets demonstrate that the proposed method outperforms existing CNN and Transformer-based approaches in classification performance. Full article
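The multi-branch embedding described here — raw spectrum, first-order derivative, and a frequency-domain view — can be sketched per pixel in a few lines. A hedged illustration (the function name and toy spectrum are not from the paper):

```python
import numpy as np

def multi_branch_features(spectrum):
    """Three input branches for one pixel's spectrum: the raw bands,
    their first-order derivative, and the Fourier amplitude spectrum
    (a simple frequency-domain feature)."""
    raw = np.asarray(spectrum, dtype=float)
    deriv = np.diff(raw, n=1)          # band-to-band slope (first derivative)
    freq = np.abs(np.fft.rfft(raw))    # frequency-domain magnitude
    return raw, deriv, freq

spec = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy 64-band pixel
raw, deriv, freq = multi_branch_features(spec)
print(raw.shape, deriv.shape, freq.shape)  # (64,) (63,) (33,)
```

Each branch would then feed its own feature extractor before fusion; the derivative branch emphasizes absorption-edge shape, the frequency branch overall spectral smoothness.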

24 pages, 3937 KB  
Article
HyperTransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Hyperspectral Image Classification
by Xin Dai, Zexi Li, Lin Li, Shuihua Xue, Xiaohui Huang and Xiaofei Yang
Remote Sens. 2025, 17(14), 2361; https://doi.org/10.3390/rs17142361 - 9 Jul 2025
Cited by 1 | Viewed by 1010
Abstract
Recent advances in hyperspectral image (HSI) classification have demonstrated the effectiveness of hybrid architectures that integrate convolutional neural networks (CNNs) and Transformers, leveraging CNNs for local feature extraction and Transformers for global dependency modeling. However, existing fusion approaches face three critical challenges: (1) insufficient synergy between spectral and spatial feature learning due to rigid coupling mechanisms; (2) high computational complexity resulting from redundant attention calculations; and (3) limited adaptability to spectral redundancy and noise in small-sample scenarios. To address these limitations, we propose HyperTransXNet, a novel CNN-Transformer hybrid architecture that incorporates adaptive spectral-spatial fusion. Specifically, the proposed HyperTransXNet comprises three key modules: (1) a Hybrid Spatial-Spectral Module (HSSM) that captures the refined local spectral-spatial features and models global spectral correlations by combining depth-wise dynamic convolution with frequency-domain attention; (2) a Mixture-of-Experts Routing (MoE-R) module that adaptively fuses multi-scale features by dynamically selecting optimal experts via Top-K sparse weights; and (3) a Spatial-Spectral Tokens Enhancer (SSTE) module that ensures causality-preserving interactions between spectral bands and spatial contexts. Extensive experiments on the Indian Pines, Houston 2013, and WHU-Hi-LongKou datasets demonstrate the superiority of HyperTransXNet. Full article
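The Top-K sparse expert selection in MoE-R follows the standard sparse-gating pattern: keep only the k largest gate scores, renormalize them with a softmax, and mix the corresponding expert outputs. A generic sketch under that assumption (not the paper's exact module; all names are illustrative):

```python
import numpy as np

def topk_moe(expert_outputs, gate_logits, k=2):
    """Sparse mixture-of-experts fusion: softmax over only the top-k
    gate logits; experts outside the top-k contribute nothing."""
    idx = np.argsort(gate_logits)[-k:]            # indices of the top-k experts
    w = np.exp(gate_logits[idx] - gate_logits[idx].max())
    w = w / w.sum()                               # softmax over the top-k only
    return sum(wi * expert_outputs[i] for wi, i in zip(w, idx))

rng = np.random.default_rng(1)
experts = [rng.random(8) for _ in range(4)]       # 4 expert feature vectors
logits = np.array([0.1, 2.0, -1.0, 1.5])
fused = topk_moe(experts, logits, k=2)            # mixes experts 1 and 3 only
print(fused.shape)  # (8,)
```

The sparsity is what keeps routing cheap: only k expert branches are evaluated and mixed per input, however many experts exist.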
(This article belongs to the Special Issue AI-Driven Hyperspectral Remote Sensing of Atmosphere and Land)

28 pages, 4356 KB  
Article
Hyperspectral Image Classification Based on Fractional Fourier Transform
by Jing Liu, Lina Lian, Yuanyuan Li and Yi Liu
Remote Sens. 2025, 17(12), 2065; https://doi.org/10.3390/rs17122065 - 15 Jun 2025
Cited by 1 | Viewed by 1642
Abstract
To effectively utilize the rich spectral information of hyperspectral remote sensing images (HRSIs), the fractional Fourier transform (FRFT) feature of HRSIs is proposed to reflect the time-domain and frequency-domain characteristics of a spectral pixel simultaneously, and an FRFT order selection criterion based on maximizing separability is also proposed. Firstly, FRFT is applied to the spectral pixels, and the amplitude spectrum is taken as the FRFT feature of HRSIs. The FRFT feature is combined with the pixel spectrum to form the presented spectral and fractional Fourier transform mixed feature (SF2MF), which contains time–frequency mixing information and spectral information of pixels. K-nearest neighbor, logistic regression, and random forest classifiers are used to verify the superiority of the proposed feature. A 1-dimensional convolutional neural network (1D-CNN) and a two-branch CNN network (Two-CNNSF2MF-Spa) are designed to extract the deep SF2MF feature and the SF2MF-spatial joint feature, respectively. Moreover, to compensate for the inability of CNNs to effectively capture the long-range features of spectral pixels, a long short-term memory (LSTM) network is combined with the CNN to form a two-branch network, C-CLSTMSF2MF, for extracting deeper and more efficient fusion features. A 3D-CNNSF2MF model is also designed, which first performs principal component analysis on the spa-SF2MF cube containing spatial information and then feeds it into the 3-dimensional convolutional neural network 3D-CNNSF2MF to extract the SF2MF-spatial joint feature effectively. The experimental results on three real HRSIs show that the presented mixed feature SF2MF can effectively improve classification accuracy. Full article
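One common way to realize a discrete FRFT is as a fractional power of the unitary DFT matrix, computed through an eigendecomposition; at order 0 it is the identity and at order 1 the ordinary DFT, with intermediate orders interpolating between the "time" and frequency domains. Discrete FRFT definitions differ in branch and eigenvector conventions, so treat this as a sketch rather than the authors' exact transform:

```python
import numpy as np

def frft_matrix(n, a):
    """Order-`a` fractional power F^a of the n x n unitary DFT matrix,
    computed via an eigendecomposition of F (one of several discrete
    FRFT definitions found in the literature)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n))
    F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)   # unitary DFT matrix
    lam, V = np.linalg.eig(F)
    return V @ np.diag(lam ** a) @ np.linalg.inv(V)

x = np.random.default_rng(0).random(16)        # toy 16-band spectral pixel
feature = np.abs(frft_matrix(16, 0.5) @ x)     # FRFT amplitude-spectrum feature
print(np.allclose(frft_matrix(16, 1.0) @ x, np.fft.fft(x) / np.sqrt(16)))  # True
```

Taking the amplitude of the order-a transform, as in the abstract, gives a feature that blends spectral-shape and frequency-content information in a single vector.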
(This article belongs to the Section Remote Sensing Image Processing)

23 pages, 5811 KB  
Article
Multi-Attitude Hybrid Network for Remote Sensing Hyperspectral Images Super-Resolution
by Chi Chen, Yunhan Sun, Xueyan Hu, Ning Zhang, Hao Feng, Zheng Li and Yongcheng Wang
Remote Sens. 2025, 17(11), 1947; https://doi.org/10.3390/rs17111947 - 4 Jun 2025
Cited by 2 | Viewed by 1359
Abstract
Benefiting from the development of deep learning, super-resolution technology for remote sensing hyperspectral images (HSIs) has achieved impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to achieve a complete characterization of the internal information, which in turn limits the precise reconstruction of detailed texture and spectral features. Therefore, we propose the multi-attitude hybrid network (MAHN) for extracting and characterizing information from multiple feature spaces. On the one hand, we construct the spectral hypergraph cross-attention module (SHCAM) and the spatial hypergraph self-attention module (SHSAM) based on the high- and low-frequency features in the spectral and spatial domains, respectively, which are used to capture the main structure and detail changes within the image. On the other hand, high-level semantic information in mixed pixels is parsed by spectral mixture analysis, and a semantic hypergraph 3D module (SH3M) is constructed based on the abundance of each category to enhance the propagation and reconstruction of semantic information. Furthermore, to mitigate the domain discrepancies among features, we introduce a sensitive bands attention mechanism (SBAM) to enhance the cross-guidance and fusion of multi-domain features. Extensive experiments demonstrate that our method achieves optimal reconstruction results compared to other state-of-the-art algorithms while effectively reducing the computational complexity. Full article

23 pages, 5262 KB  
Article
FSFF-Net: A Frequency-Domain Feature and Spatial-Domain Feature Fusion Network for Hyperspectral Image Classification
by Xinyu Pan, Chen Zang, Wanxuan Lu, Guiyuan Jiang and Qian Sun
Electronics 2025, 14(11), 2234; https://doi.org/10.3390/electronics14112234 - 30 May 2025
Cited by 4 | Viewed by 1401
Abstract
In hyperspectral image (HSI) classification, each pixel is assigned to a specific land cover type, which is critical for applications in environmental monitoring, agriculture, and urban planning. Convolutional neural networks (CNNs) and Transformers have become widely adopted due to their exceptional feature extraction capabilities. However, the local receptive field of CNNs limits their ability to capture global context, while Transformers, though effective in modeling long-range dependencies, introduce computational overhead. To address these challenges, we propose a frequency-domain and spatial-domain feature fusion network (FSFF-Net) for HSI classification, which reduces computational complexity while capturing global features. The FSFF-Net consists of a frequency-domain transformer (FDformer) and a depthwise convolution-based parallel encoder structure. The FDformer replaces the self-attention mechanism of traditional Vision Transformers with a three-step process: a two-dimensional discrete Fourier transform (2D-DFT), an adaptive filter, and a two-dimensional inverse discrete Fourier transform (2D-IDFT). The 2D-DFT and 2D-IDFT convert images between the spatial and frequency domains. The adaptive filter retains important frequency components, removes redundant ones, and assigns weights to the different frequency components. This module not only reduces computational overhead by decreasing the number of parameters, but also mitigates the limitations of CNNs by capturing complementary frequency-domain features that enhance the spatial-domain features for improved classification. In parallel, depthwise convolution is employed to capture spatial-domain features. The network then integrates the frequency-domain features from the FDformer and the spatial-domain features from the depthwise convolution through a feature fusion module.
The experimental results demonstrate that our method is efficient and robust for HSI classification, achieving overall accuracies of 98.03%, 99.57%, 97.05%, and 98.40% on the Indian Pines, Pavia University, Salinas, and Houston 2013 datasets, respectively. Full article
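The three-step mixer described above (2D-DFT → adaptive filter → 2D-IDFT) follows the same pattern as a global-filter layer: token mixing becomes an elementwise multiply in the frequency domain, O(N log N) instead of quadratic self-attention. A minimal sketch with a hand-set filter standing in for the learnable one (names are illustrative, not the paper's API):

```python
import numpy as np

def global_filter_mix(x, filt):
    """Frequency-domain token mixing: forward 2D-DFT, elementwise
    multiplication by a complex filter, inverse 2D-DFT."""
    X = np.fft.rfft2(x)                          # spatial -> frequency domain
    return np.fft.irfft2(X * filt, s=x.shape)    # filtered, back to spatial

h, w = 16, 16
x = np.random.default_rng(0).random((h, w))
filt = np.ones((h, w // 2 + 1), dtype=complex)   # all-ones filter = identity init
print(np.allclose(global_filter_mix(x, filt), x))  # True
```

In a trained network `filt` would be a learned complex-valued tensor, so the layer adaptively boosts or suppresses individual frequency components, exactly the role the abstract assigns to the adaptive filter.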
(This article belongs to the Special Issue Innovation and Technology of Computer Vision)

24 pages, 6314 KB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Viewed by 1217
Abstract
Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. On the other hand, frequency-based approaches struggle to effectively integrate spatial and spectral cues, often neglecting the underlying information content distributions across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields.
Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and producing images with higher perceptual quality. Full article
(This article belongs to the Section Signal and Data Analysis)

20 pages, 3965 KB  
Article
Hyperspectral Spatial Frequency Domain Imaging Technique for Soluble Solids Content and Firmness Assessment of Pears
by Yang Yang, Xiaping Fu and Ying Zhou
Horticulturae 2024, 10(8), 853; https://doi.org/10.3390/horticulturae10080853 - 12 Aug 2024
Cited by 3 | Viewed by 1424
Abstract
Hyperspectral spatial frequency domain imaging (HSFDI) combines hyperspectral imaging and spatial frequency domain imaging techniques, offering advantages such as a wide spectral range, non-contact measurement, and depth-differentiated imaging, making it well-suited for measuring the optical properties of agricultural products. The diffuse reflectance spectra of the samples at spatial frequencies of 0 mm⁻¹ (Rd0) and 0.2 mm⁻¹ (Rd1) were obtained using the three-phase demodulation algorithm. Pixel-by-pixel inversion was performed to obtain the absorption coefficient (μa) spectra and the reduced scattering coefficient (μs′) spectra of the pears. For predicting the soluble solids content (SSC) and firmness of the pears, these optical properties and their combinations were used as inputs for partial least squares regression (PLSR) modeling, combined with the competitive adaptive reweighted sampling (CARS) wavelength selection algorithm. The results showed that μa had a stronger correlation with SSC, whereas μs′ exhibited a stronger correlation with firmness. Taking the planar diffuse reflectance Rd0 as the baseline, the SSC predictions based on both μa and the combination of diffuse reflectance at the two spatial frequencies (Rd) were superior (best Rp2 of 0.90 and RMSEP of 0.41%). Similarly, for firmness prediction, the results of μs′, μa×μs′, and Rd1 were better than those of Rd0 (best Rp2 of 0.80 and RMSEP of 3.25%). The findings of this research indicate that the optical properties measured by HSFDI and their combinations can accurately predict the internal quality of pears, providing a novel technical approach for the non-destructive internal quality evaluation of agricultural products. Full article
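The three-phase demodulation step named in the abstract has a standard closed form: from three images captured under sinusoidal illumination phase-shifted by 0, 2π/3, and 4π/3, the modulated (AC) and planar (DC) amplitudes are recovered per pixel. A NumPy sketch of that standard formula on synthetic data:

```python
import numpy as np

def three_phase_demodulate(i1, i2, i3):
    """Standard SFDI three-phase demodulation.

    i1..i3 are images under sinusoidal illumination shifted by
    0, 2*pi/3, 4*pi/3.  Returns (AC, DC): the modulation amplitude at
    the projected spatial frequency and the planar (f = 0) component."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Synthetic pixel row: DC offset 1.0, modulation amplitude 0.25.
xx = np.linspace(0, 2 * np.pi, 100)
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
imgs = [1.0 + 0.25 * np.cos(xx + p) for p in phases]
ac, dc = three_phase_demodulate(*imgs)
print(ac.max(), dc.mean())  # ≈ 0.25 and ≈ 1.0
```

The recovered AC image at each spatial frequency is what feeds the per-pixel inversion for μa and μs′ described above.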
(This article belongs to the Section Postharvest Biology, Quality, Safety, and Technology)

21 pages, 2495 KB  
Article
Hyperspectral Image Classification Using Multi-Scale Lightweight Transformer
by Quan Gu, Hongkang Luan, Kaixuan Huang and Yubao Sun
Electronics 2024, 13(5), 949; https://doi.org/10.3390/electronics13050949 - 29 Feb 2024
Cited by 6 | Viewed by 2865
Abstract
The distinctive feature of hyperspectral images (HSIs) is their large number of spectral bands, which allows categories of ground objects to be identified by capturing discrepancies in spectral information. Convolutional neural networks (CNNs) with attention modules effectively improve HSI classification accuracy. However, CNNs are not successful in capturing long-range spectral–spatial dependencies. In recent years, the Vision Transformer (ViT) has received widespread attention due to its excellent performance in acquiring long-range features. However, it requires calculating the pairwise correlation between token embeddings, with complexity quadratic in the number of tokens, which increases the computational cost of the network. To cope with this issue, this paper proposes a multi-scale spectral–spatial attention network with a frequency-domain lightweight Transformer (MSA-LWFormer) for HSI classification. This method synergistically integrates CNNs, attention mechanisms, and Transformers into a spectral–spatial feature extraction module and a frequency-domain fused classification module. Specifically, the spectral–spatial feature extraction module employs a multi-scale 2D-CNN with multi-scale spectral attention (MS-SA) to extract shallow spectral–spatial features and capture long-range spectral dependencies. In addition, the frequency-domain fused classification module designs a frequency-domain lightweight Transformer that employs the Fast Fourier Transform (FFT) to convert features from the spatial domain to the frequency domain, effectively extracting global information and significantly reducing the time complexity of the network. Experiments on three classic hyperspectral datasets show that MSA-LWFormer achieves excellent performance. Full article
(This article belongs to the Topic Hyperspectral Imaging and Signal Processing)

18 pages, 3491 KB  
Review
A Comparison of Spectroscopy and Imaging Techniques Utilizing Spectrally Resolved Diffusely Reflected Light for Intraoperative Margin Assessment in Breast-Conserving Surgery: A Systematic Review and Meta-Analysis
by Dhurka Shanthakumar, Maria Leiloglou, Colm Kelliher, Ara Darzi, Daniel S. Elson and Daniel R. Leff
Cancers 2023, 15(11), 2884; https://doi.org/10.3390/cancers15112884 - 23 May 2023
Cited by 7 | Viewed by 2687
Abstract
Up to 19% of patients require re-excision surgery due to positive margins in breast-conserving surgery (BCS). Intraoperative margin assessment tools (IMAs) that incorporate tissue optical measurements could help reduce re-excision rates. This review focuses on methods that use and assess spectrally resolved diffusely reflected light for breast cancer detection in the intraoperative setting. Following PROSPERO registration (CRD42022356216), an electronic search was performed. The modalities searched for were diffuse reflectance spectroscopy (DRS), multispectral imaging (MSI), hyperspectral imaging (HSI), and spatial frequency domain imaging (SFDI). The inclusion criteria encompassed studies of human in vivo or ex vivo breast tissues, which presented data on accuracy. The exclusion criteria were contrast use, frozen samples, and other imaging adjuncts. Nineteen studies were selected following PRISMA guidelines. Studies were divided into point-based (spectroscopy) or whole field-of-view (imaging) techniques. A fixed- or random-effects model analysis generated pooled sensitivity/specificity for the different modalities, following heterogeneity calculations using the Q statistic. Overall, imaging-based techniques had better pooled sensitivity/specificity (0.90 (CI 0.76–1.03)/0.92 (CI 0.78–1.06)) compared with probe-based techniques (0.84 (CI 0.78–0.89)/0.85 (CI 0.79–0.91)). The use of spectrally resolved diffusely reflected light is a rapid, non-contact technique that confers accuracy in discriminating between normal and malignant breast tissue, and it constitutes a potential IMA tool. Full article
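The meta-analytic step described above (pooling per-study estimates under a fixed- or random-effects model, with heterogeneity measured by the Q statistic) can be sketched numerically. This is a generic inverse-variance pooling with a DerSimonian–Laird random-effects adjustment, not the authors' analysis code; the study values below are hypothetical.

```python
import numpy as np

def pooled_estimate(estimates, variances):
    """Inverse-variance pooling with a DerSimonian-Laird random-effects
    adjustment. Returns (pooled estimate, Cochran's Q, tau-squared)."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var                        # fixed-effect weights
    fixed = np.sum(w * est) / np.sum(w)  # fixed-effect pooled value
    q = np.sum(w * (est - fixed) ** 2)   # Cochran's Q (heterogeneity)
    df = len(est) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)        # between-study variance
    w_re = 1.0 / (var + tau2)            # random-effects weights
    pooled = np.sum(w_re * est) / np.sum(w_re)
    return pooled, q, tau2

# Hypothetical per-study sensitivities and their variances:
sens = [0.84, 0.90, 0.78, 0.92]
variances = [0.002, 0.004, 0.003, 0.005]
print(pooled_estimate(sens, variances))
```

When tau-squared is driven to zero (Q no larger than its degrees of freedom), the random-effects result collapses to the fixed-effect one, which matches the "fixed- or random-effects" choice described in the abstract.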

28 pages, 9281 KB  
Article
Spectral Correlation and Spatial High–Low Frequency Information of Hyperspectral Image Super-Resolution Network
by Jing Zhang, Renjie Zheng, Xu Chen, Zhaolong Hong, Yunsong Li and Ruitao Lu
Remote Sens. 2023, 15(9), 2472; https://doi.org/10.3390/rs15092472 - 8 May 2023
Cited by 9 | Viewed by 3240
Abstract
Hyperspectral images (HSIs) generally contain tens or even hundreds of spectral segments within a specific frequency range. Due to the limitations and cost of imaging sensors, HSIs often trade spatial resolution for finer band resolution. Existing algorithms have achieved excellent results in compensating for the loss of spatial resolution while maintaining a balance between space and spectrum. However, these algorithms could not fully mine the coupling relationship between the spectral and spatial domains of HSIs. In this study, we presented SCSFINet, a hyperspectral image super-resolution network based on spectrum-guided attention that exploits spectral correlation and spatial high- and low-frequency information already present in HSIs. The core of our algorithm was the spectral and spatial feature extraction module (SSFM), consisting of two key elements: (a) spectrum-guided attention fusion (SGAF), using SGSA/SGCA and CFJSF to extract spectral–spatial and spectral–channel joint feature attention, and (b) high- and low-frequency separated multi-level feature fusion (FSMFF) for fusing the multi-level information. In the final stage of upsampling, we proposed the channel grouping and fusion (CGF) module, which can group feature channels and extract and merge features within and between groups to further refine the features and provide finer detail for sub-pixel convolution. Tests on three general hyperspectral datasets suggested the advantage of our method over existing hyperspectral super-resolution algorithms. Full article
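The FSMFF element above rests on separating features into high- and low-frequency parts. The paper's separation operator is not specified here, so the following is only a minimal sketch of one common approach: an ideal low-pass mask in the FFT domain, applied per band, with the high-frequency part defined as the residual so the two components sum back to the input.

```python
import numpy as np

def split_frequencies(img: np.ndarray, cutoff: float = 0.25):
    """Split a 2-D (single-band) image into low- and high-frequency
    components using an ideal low-pass mask in the FFT domain.
    By construction, low + high reconstructs img exactly."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w) / 2   # keep central (low) frequencies
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low                          # residual = high frequencies
    return low, high

img = np.random.default_rng(1).standard_normal((32, 32))
low, high = split_frequencies(img)
print(np.allclose(low + high, img))  # True
```

Processing `low` and `high` in separate branches and fusing them later mirrors, at a toy scale, the separated multi-level fusion the abstract describes.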