Search Results (3)

Search Parameters:
Keywords = HSI for biology

20 pages, 2390 KiB  
Article
SS-TMNet: Spatial–Spectral Transformer Network with Multi-Scale Convolution for Hyperspectral Image Classification
by Xiaohui Huang, Yunfei Zhou, Xiaofei Yang, Xianhong Zhu and Ke Wang
Remote Sens. 2023, 15(5), 1206; https://doi.org/10.3390/rs15051206 - 22 Feb 2023
Cited by 12 | Viewed by 3487
Abstract
Hyperspectral image (HSI) classification is a significant foundation for remote sensing image analysis, widely used in biology, aerospace, and other applications. Convolutional neural networks (CNNs) and attention mechanisms have shown outstanding ability in HSI classification and have been widely studied in recent years. However, existing CNN-based and attention-based methods cannot fully exploit spatial–spectral information, which limits further improvements in HSI classification accuracy. This paper proposes a new spatial–spectral Transformer network with multi-scale convolution (SS-TMNet), which can effectively extract local and global spatial–spectral information. SS-TMNet includes two key modules: a multi-scale 3D convolution projection module (MSCP) and a spatial–spectral attention module (SSAM). The MSCP uses multi-scale 3D convolutions with different depths to extract fused spatial–spectral features. The SSAM includes three branches: height spatial attention, width spatial attention, and spectral attention, which extract the fused information of spatial and spectral features. The proposed SS-TMNet was tested on three widely used HSI datasets: Pavia University, Indian Pines, and Houston 2013. The experimental results show that SS-TMNet is superior to existing methods.
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Classification II)
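The abstract names two building blocks: a multi-scale 3D convolution projection (MSCP) that applies 3D convolutions of different depths, and a spatial–spectral attention module (SSAM) with height, width, and spectral branches. The following is a minimal PyTorch sketch of those two ideas only; the kernel depths, the pooling-and-gating form of the axis attention, and all tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiScale3DProjection(nn.Module):
    """Fuse spatial-spectral features with 3D convolutions of different spectral depths."""

    def __init__(self, out_channels: int = 8, depths=(3, 5, 7)):
        super().__init__()
        # Assumed kernel depths; the abstract only states "different depths".
        self.branches = nn.ModuleList([
            nn.Conv3d(1, out_channels, kernel_size=(d, 3, 3), padding=(d // 2, 1, 1))
            for d in depths
        ])

    def forward(self, x):  # x: (B, 1, bands, H, W)
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class AxisAttention(nn.Module):
    """Single-head self-attention along one axis (band, height, or width)."""

    def __init__(self, channels: int, axis: int):
        super().__init__()
        self.axis = axis  # 2 = spectral, 3 = height, 4 = width for (B, C, D, H, W)
        self.attn = nn.MultiheadAttention(channels, num_heads=1, batch_first=True)

    def forward(self, x):  # x: (B, C, D, H, W)
        # Pool away the other two axes, attend over the chosen one,
        # then gate the input with the attended profile.
        other = tuple(d for d in (2, 3, 4) if d != self.axis)
        tokens = x.mean(dim=other).transpose(1, 2)          # (B, L, C)
        attended, _ = self.attn(tokens, tokens, tokens)     # (B, L, C)
        gate = torch.sigmoid(attended.transpose(1, 2))      # (B, C, L)
        shape = [x.shape[0], x.shape[1], 1, 1, 1]
        shape[self.axis] = x.shape[self.axis]
        return x * gate.reshape(shape)


class SpatialSpectralAttention(nn.Module):
    """Three branches: height attention, width attention, and spectral attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.spectral = AxisAttention(channels, axis=2)
        self.height = AxisAttention(channels, axis=3)
        self.width = AxisAttention(channels, axis=4)

    def forward(self, x):  # x: (B, C, bands, H, W)
        return self.height(x) + self.width(x) + self.spectral(x)


if __name__ == "__main__":
    patch = torch.randn(2, 1, 30, 15, 15)                   # two 15x15 patches, 30 bands
    feats = MultiScale3DProjection(out_channels=8)(patch)   # (2, 24, 30, 15, 15)
    out = SpatialSpectralAttention(channels=24)(feats)
    print(out.shape)                                        # torch.Size([2, 24, 30, 15, 15])
```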

7 pages, 183 KiB  
Editorial
The Future of Hyperspectral Imaging
by Stefano Selci
J. Imaging 2019, 5(11), 84; https://doi.org/10.3390/jimaging5110084 - 25 Oct 2019
Cited by 16 | Viewed by 7644
Abstract
The Special Issue on hyperspectral imaging (HSI), entitled “The Future of Hyperspectral Imaging”, has published 12 papers. Nine papers report specific current research and three are review contributions; in both cases, authors were asked to present methods or instruments that point to future trends in HSI. Some contributions also update specific methodological or mathematical tools. In particular, the review papers address deep learning methods for HSI analysis, while HSI data compression is reviewed using liquid-crystal spectral multiplexing as well as DMD-based Raman spectroscopy. Specific topics explored with HSI data include alerting on the sprouting of potato tubers, investigating the stability of painting samples, predicting the healing of diabetic foot ulcers, and determining the age of blood-stained fingerprints. Papers advancing more general topics include a video approach to HSI of dynamic scenes, localization of plant diseases, new methods for lossless compression of HSI data, fusion of multiple multiband images, and mixed modes of laser HSI for sorting and quality control.
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)
22 pages, 4725 KiB  
Article
Forest Types Classification Based on Multi-Source Data Fusion
by Ming Lu, Bin Chen, Xiaohan Liao, Tianxiang Yue, Huanyin Yue, Shengming Ren, Xiaowen Li, Zhen Nie and Bing Xu
Remote Sens. 2017, 9(11), 1153; https://doi.org/10.3390/rs9111153 - 10 Nov 2017
Cited by 39 | Viewed by 9698
Abstract
Forests play an important role in global carbon, hydrological and atmospheric cycles and provide a wide range of valuable ecosystem services. Timely and accurate forest-type mapping is essential for forest resource inventory, supporting forest management, conservation biology and ecological restoration. Despite efforts and progress in forest cover mapping using multi-source remotely sensed data, modeling at fine spatial, temporal and spectral resolutions for distinguishing forest types remains limited. In this paper, we propose a novel spatial-temporal-spectral fusion framework that combines spatial-spectral fusion and spatial-temporal fusion. Addressing the shortcomings of the commonly used spatial-spectral fusion model, we propose a novel spatial-spectral fusion model called the Segmented Difference Value method (SEGDV) to generate fine spatial-spectral-resolution images by blending the China Environment 1A series satellite (HJ-1A) multispectral Charge-Coupled Device (CCD) and Hyperspectral Imager (HSI) images. A Hierarchical Spatiotemporal Adaptive Fusion Model (HSTAFM) was used for spatial-temporal fusion to generate fine spatial-temporal-resolution images by blending HJ-1A CCD and Moderate Resolution Imaging Spectroradiometer (MODIS) data. The spatial-spectral-temporal information was then used simultaneously to distinguish various forest types. A classification comparison conducted in the Gan River source nature reserves showed that the proposed method effectively enhances spatial, temporal and spectral information: the fused dataset yielded the highest classification accuracy of 83.6%, compared with 69.95% for a single Landsat-8 image, 70.95% for spatial-spectral fusion alone and 78.94% for spatial-temporal fusion alone, indicating that the proposed method is valid and applicable for forest type classification.
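The abstract compares overall classification accuracies for the fused dataset against single-source inputs but does not detail the evaluation pipeline here. Below is a minimal, hypothetical sketch of that final step: a per-pixel classifier trained on a fused spatial-temporal-spectral feature stack and scored by overall accuracy. The random-forest choice, all array shapes, and the random stand-in data are assumptions for illustration only; real inputs would come from the SEGDV and HSTAFM fusion stages described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for a fused spatial-temporal-spectral cube (H x W x fused features)
# and a per-pixel forest-type label map (assumed shapes, random values).
height, width, n_features, n_classes = 60, 60, 40, 5
fused_cube = rng.normal(size=(height, width, n_features))
labels = rng.integers(0, n_classes, size=(height, width))

# Flatten to per-pixel samples for a standard pixel-wise classification setup.
X = fused_cube.reshape(-1, n_features)
y = labels.reshape(-1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Assumed classifier (the abstract does not name one); report overall accuracy.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```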
