Deep Learning for the Analysis of Multi-/Hyperspectral Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 32793

Special Issue Editors


Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Interests: image super-resolution; image denoising; video processing; hyperspectral image analysis; image fusion; visual recognition; machine learning

Guest Editor
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: deep learning; remote sensing image processing and analysis

Special Issue Information

Dear Colleagues,

Unlike the human eye, which is sensitive only to visible light, multi-/hyperspectral imaging collects and processes information across a large portion of the electromagnetic spectrum. Multi-/hyperspectral images have a strong spectral diagnostic capability to distinguish materials that, to humans, look similar. Over the past few years, deep learning has powered many aspects of remote sensing image processing, ranging from low-level restoration to high-level analysis, and deep learning-based approaches have achieved remarkable breakthroughs.

This Special Issue invites manuscripts that present new deep learning models or introduce the most advanced deep networks for processing and analyzing multi-/hyperspectral images. As this is a broad area, there are no constraints regarding the field of application. Articles for this Special Issue on deep learning for the analysis of multi-/hyperspectral images may address, but are not limited to, the following topics:

  • Spatial/Spectral Super-Resolution
  • Image Fusion/Pansharpening
  • Image Denoising/Destriping
  • Image Registration/Matching
  • Compressive Sensing
  • Computational Imaging
  • Image/Scene Classification
  • Object Detection
  • Clustering
  • Segmentation

Prof. Dr. Junjun Jiang
Prof. Dr. Leyuan Fang
Prof. Dr. Jiayi Ma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • neural network
  • image processing
  • image analysis
  • multispectral image
  • hyperspectral image

Published Papers (14 papers)


Research

Jump to: Review

21 pages, 19287 KiB  
Article
A Multi-Attention Autoencoder for Hyperspectral Unmixing Based on the Extended Linear Mixing Model
by Lijuan Su, Jun Liu, Yan Yuan and Qiyue Chen
Remote Sens. 2023, 15(11), 2898; https://doi.org/10.3390/rs15112898 - 02 Jun 2023
Cited by 5 | Viewed by 1702
Abstract
Hyperspectral unmixing, which decomposes mixed pixels into the endmembers and corresponding abundances, is an important image process for the further application of hyperspectral images (HSIs). Lately, the unmixing problem has been solved using deep learning techniques, particularly autoencoders (AEs). However, the majority of them are based on the simple linear mixing model (LMM), which disregards the spectral variability of endmembers in different pixels. In this article, we present a multi-attention AE network (MAAENet) based on the extended LMM to address the issue of the spectral variability problem in real scenes. Moreover, the majority of AE networks ignore the global spatial information in HSIs and operate pixel- or patch-wise. We employ attention mechanisms to design a spatial–spectral attention (SSA) module that can deal with the band redundancy in HSIs and extract global spatial features through spectral correlation. Moreover, noticing that the mixed pixels are always present in the intersection of different materials, a novel sparse constraint based on spatial homogeneity is designed to constrain the abundance and abstract local spatial features. Ablation experiments are conducted to verify the effectiveness of the proposed AE structure, SSA module, and sparse constraint. The proposed method is compared with several state-of-the-art unmixing methods and exhibits competitiveness on both synthetic and real datasets. Full article
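The linear mixing model that this article extends expresses each pixel spectrum as a nonnegative, sum-to-one combination of endmember signatures. A minimal illustrative sketch of unmixing under that model (the endmembers and abundances below are synthetic assumptions, not the paper's data or method):

```python
import numpy as np

# Linear mixing model: y = E @ a, with a >= 0 and sum(a) = 1.
rng = np.random.default_rng(0)

bands, n_end = 50, 3
E = rng.random((bands, n_end))          # endmember signatures (synthetic)
a_true = np.array([0.6, 0.3, 0.1])      # ground-truth abundances
y = E @ a_true                          # noiseless mixed pixel

# Unconstrained least-squares estimate of the abundances ...
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)

# ... followed by a crude projection onto the abundance simplex.
a_hat = np.clip(a_hat, 0, None)
a_hat = a_hat / a_hat.sum()

print(np.round(a_hat, 3))               # close to a_true for noiseless data
```

Autoencoder-based unmixing replaces the least-squares step with a learned encoder, and extended models additionally let the endmember signatures vary per pixel.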
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images)

25 pages, 4368 KiB  
Article
Reconstruction of Compressed Hyperspectral Image Using SqueezeNet Coupled Dense Attentional Net
by Divya Mohan, J. Aravinth and Sankaran Rajendran
Remote Sens. 2023, 15(11), 2734; https://doi.org/10.3390/rs15112734 - 24 May 2023
Cited by 5 | Viewed by 1299
Abstract
This study addresses image denoising alongside the compression and reconstruction of hyperspectral images (HSIs) using deep learning techniques, since the research community is striving to produce effective results to utilize hyperspectral data. Here, the SqueezeNet architecture is trained with a Gaussian noise model to predict and discriminate noisy pixels of HSI to obtain a clean image as output. The denoised image is further processed by the tunable spectral filter (TSF), which is a dual-level prediction filter to produce a compressed image. Subsequently, the compressed image is analyzed through a dense attentional net (DAN) model for reconstruction by reverse dual-level prediction operation. All the proposed mechanisms are employed in Python and evaluated using a Ben-Gurion University-Interdisciplinary Computational Vision Laboratory (BGU-ICVL) dataset. The results of SqueezeNet architecture applied to the dataset produced the denoised output with a Peak Signal to Noise Ratio (PSNR) value of 45.43 dB. The TSF implemented to the denoised images provided compression with a Mean Square Error (MSE) value of 8.334. Subsequently, the DAN model executed and produced reconstructed images with a Structural Similarity Index Measure (SSIM) value of 0.9964 dB. The study proved that each stage of the proposed approach resulted in a quality output, and the developed model is more effective to further utilize the HSI. This model can be well utilized using HSI data for mineral exploration. Full article
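The PSNR and MSE figures quoted above follow the standard definitions. A minimal numpy sketch with synthetic images (the values here are illustrative, not from the paper's dataset):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(x, y)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

clean = np.full((8, 8), 100.0)
noisy = clean + 5.0                     # constant error of 5 -> MSE = 25
print(mse(clean, noisy))                # 25.0
print(round(psnr(clean, noisy), 2))     # 10*log10(255**2 / 25) = 34.15
```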

22 pages, 85270 KiB  
Article
Two-Way Generation of High-Resolution EO and SAR Images via Dual Distortion-Adaptive GANs
by Yuanyuan Qing, Jiang Zhu, Hongchuan Feng, Weixian Liu and Bihan Wen
Remote Sens. 2023, 15(7), 1878; https://doi.org/10.3390/rs15071878 - 31 Mar 2023
Cited by 1 | Viewed by 2433
Abstract
Synthetic aperture radar (SAR) provides an all-weather and all-time imaging platform, which is more reliable than electro-optical (EO) remote sensing imagery under extreme weather/lighting conditions. While many large-scale EO-based remote sensing datasets have been released for computer vision tasks, there are few publicly available SAR image datasets due to the high costs associated with acquisition and labeling. Recent works have applied deep learning methods for image translation between SAR and EO. However, the effectiveness of those techniques on high-resolution images has been hindered by a common limitation. Non-linear geometric distortions, induced by different imaging principles of optical and radar sensors, have caused insufficient pixel-wise correspondence between an EO-SAR patch pair. Such a phenomenon is not prominent in low-resolution EO-SAR datasets, e.g., SEN1-2, one of the most frequently used datasets, and thus has been seldom discussed. To address this issue, a new dataset SN6-SAROPT with sub-meter resolution is introduced, and a novel image translation algorithm designed to tackle geometric distortions adaptively is proposed in this paper. Extensive experiments have been conducted to evaluate the proposed algorithm, and the results have validated its superiority over other methods for both SAR to EO (S2E) and EO to SAR (E2S) tasks, especially for urban areas in high-resolution images. Full article

22 pages, 63169 KiB  
Article
PatchMask: A Data Augmentation Strategy with Gaussian Noise in Hyperspectral Images
by Hong-Xia Dou, Xing-Shun Lu, Chao Wang, Hao-Zhen Shen, Yu-Wei Zhuo and Liang-Jian Deng
Remote Sens. 2022, 14(24), 6308; https://doi.org/10.3390/rs14246308 - 13 Dec 2022
Cited by 4 | Viewed by 2072
Abstract
Data augmentation (DA) is an effective way to enrich the diversity of data and improve a model’s generalization ability. It has been widely used in many advanced vision tasks (e.g., classification, recognition, etc.), while it is rarely seen in hyperspectral image (HSI) tasks. In this paper, we analyze whether existing augmentation methods are suitable for the task of HSI denoising and find that the biggest challenge is to neither lose the spatial information of the original image nor destroy the correlation between the various bands. Based on this, a new data augmentation method named PatchMask is proposed, which makes the training samples as diverse as possible while preserving the spatial and spectral information. The training data augmented by this method are somewhere between clear and noisy, which helps the network learn more effectively and generalize better. Experiments demonstrate that our method outperforms other data augmentation methods, such as the benchmark CutBlur, in enhancing HSI denoising. In addition, the given DA method was used on several popular denoising networks, such as QRNN3D, DnCNN, MPRnet, CBDNet, and HSID-CNN, to verify the effectiveness of the proposed method. The results show that the given DA could increase the value of the PSNR by 0.2∼0.5 dB in various examples. Full article
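The exact PatchMask recipe is described in the paper; the general idea of producing training data "somewhere between clear and noisy" by swapping whole spatial patches across all bands (so inter-band correlation is preserved) can be sketched as follows. Patch size and mask ratio here are illustrative assumptions:

```python
import numpy as np

def patch_mix(clean, noisy, patch=4, ratio=0.5, rng=None):
    """Paste random spatial patches from the noisy cube into the clean cube.

    clean, noisy: (H, W, B) hyperspectral cubes; each patch is swapped across
    all B bands at once, so the spectral correlation inside a patch survives.
    """
    if rng is None:
        rng = np.random.default_rng()
    out = clean.copy()
    H, W, _ = clean.shape
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            if rng.random() < ratio:
                out[i:i+patch, j:j+patch, :] = noisy[i:i+patch, j:j+patch, :]
    return out

clean = np.zeros((8, 8, 3))
noisy = np.ones((8, 8, 3))
mixed = patch_mix(clean, noisy, patch=4, ratio=0.5, rng=np.random.default_rng(0))
# mixed lies between clean and noisy: each 4x4 patch is either all-clean or all-noisy
```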

20 pages, 2909 KiB  
Article
Dual-Branch Attention-Assisted CNN for Hyperspectral Image Classification
by Wei Huang, Zhuobing Zhao, Le Sun and Ming Ju
Remote Sens. 2022, 14(23), 6158; https://doi.org/10.3390/rs14236158 - 05 Dec 2022
Cited by 6 | Viewed by 1796
Abstract
Convolutional neural network (CNN)-based hyperspectral image (HSI) classification models have developed rapidly in recent years due to their superiority. However, recent deep learning methods based on CNN tend to be deep networks with multiple parameters, which inevitably resulted in information redundancy and increased computational cost. We propose a dual-branch attention-assisted CNN (DBAA-CNN) for HSI classification to address these problems. The network consists of spatial-spectral and spectral attention branches. The spatial-spectral branch integrates multi-scale spatial information with cross-channel attention by extracting spatial–spectral information jointly utilizing a 3-D CNN and a pyramid squeeze-and-excitation attention (PSA) module. The spectral branch maps the original features to the spectral interaction space for feature representation and learning by adding an attention module. Finally, the spectral and spatial features are combined and input into the linear layer to generate the sample label. We conducted tests with three common hyperspectral datasets to test the efficacy of the framework. Our method outperformed state-of-the-art HSI classification algorithms based on classification accuracy and processing time. Full article
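The pyramid squeeze-and-excitation module builds on the standard squeeze-and-excitation recipe: globally average-pool each channel, pass the result through a small bottleneck, and rescale the channels with the resulting weights. A numpy sketch with random weights (shapes and reduction ratio are illustrative, not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_attention(feat, w1, w2):
    """Squeeze-and-excitation: feat is (H, W, C); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = feat.mean(axis=(0, 1))            # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)      # bottleneck + ReLU
    scale = sigmoid(hidden @ w2)                # per-channel weights in (0, 1)
    return feat * scale                         # reweight the channels

rng = np.random.default_rng(0)
feat = rng.random((8, 8, 16))                   # toy feature map, C = 16
w1 = rng.standard_normal((16, 4))               # reduction ratio r = 4
w2 = rng.standard_normal((4, 16))
out = se_attention(feat, w1, w2)
print(out.shape)                                # (8, 8, 16)
```

In the trained network w1 and w2 are learned, so informative spectral channels end up with weights near 1 and redundant ones are suppressed.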

17 pages, 2478 KiB  
Article
CSCE-Net: Channel-Spatial Contextual Enhancement Network for Robust Point Cloud Registration
by Jingtao Wang, Changcai Yang, Lifang Wei and Riqing Chen
Remote Sens. 2022, 14(22), 5751; https://doi.org/10.3390/rs14225751 - 14 Nov 2022
Cited by 2 | Viewed by 1695
Abstract
Seeking reliable correspondences between two scenes is crucial for solving feature-based point cloud registration tasks. In this paper, we propose a novel outlier rejection network, called Channel-Spatial Contextual Enhancement Network (CSCE-Net), to obtain rich contextual information on correspondences, which can effectively remove outliers and improve the accuracy of point cloud registration. To be specific, we design a novel channel-spatial contextual (CSC) block, which is mainly composed of the Channel-Spatial Attention (CSA) layer and the Nonlocal Channel-Spatial Attention (Nonlocal CSA) layer. The CSC block is able to obtain more reliable contextual information, in which the CSA layer can selectively aggregate the mutual information between the channel and spatial dimensions. The Nonlocal CSA layer can compute feature similarity and spatial consistency for each correspondence, and the CSA layer and Nonlocal CSA layer can support each other. In addition, to improve the distinguishing ability between inliers and outliers, we present an advanced seed selection mechanism to select more dependable initial correspondences. Extensive experiments demonstrate that CSCE-Net outperforms state-of-the-art methods for outlier rejection and pose estimation tasks on public datasets with varying 3D local descriptors. In addition, the network parameters of CSCE-Net are reduced from 1.05M to 0.56M compared to the recently learning-based outlier rejection method PointDSC. Full article

23 pages, 13349 KiB  
Article
A Lightweight Multi-Level Information Network for Multispectral and Hyperspectral Image Fusion
by Mingming Ma, Yi Niu, Chang Liu, Fu Li and Guangming Shi
Remote Sens. 2022, 14(21), 5600; https://doi.org/10.3390/rs14215600 - 06 Nov 2022
Viewed by 1671
Abstract
The process of fusing the rich spectral information of a low spatial resolution hyperspectral image (LR-HSI) with the spatial information of a high spatial resolution multispectral image (HR-MSI) to obtain an HSI with the spatial resolution of an MSI image is called hyperspectral image fusion (HIF). To reconstruct hyperspectral images at video frame rate, we propose a lightweight multi-level information network (MINet) for multispectral and hyperspectral image fusion. Specifically, we develop a novel lightweight feature fusion model, namely residual constraint block based on global variance fine-tuning (GVF-RCB), to complete the feature extraction and fusion of hyperspectral images. Further, we define a residual activity factor to judge the learning ability of the residual module, thereby verifying the effectiveness of GVF-RCB. In addition, we use cascade cross-level fusion to embed the different spectral bands of the upsampled LR-HSI in a progressive manner to compensate for lost spectral information at different levels and to maintain spatial high frequency information at all times. Experiments on different datasets show that our MINet outperforms the state-of-the-art methods in terms of objective metrics, in particular by requiring only 30% of the running time and 20% of the number of parameters. Full article

28 pages, 35998 KiB  
Article
Supervised Contrastive Learning-Based Classification for Hyperspectral Image
by Lingbo Huang, Yushi Chen, Xin He and Pedram Ghamisi
Remote Sens. 2022, 14(21), 5530; https://doi.org/10.3390/rs14215530 - 02 Nov 2022
Cited by 1 | Viewed by 2301
Abstract
Recently, deep learning methods, especially convolutional neural networks (CNNs), have achieved good performance for hyperspectral image (HSI) classification. However, due to limited training samples of HSIs and the high volume of trainable parameters in deep models, training deep CNN-based models is still a challenge. To address this issue, this study investigates contrastive learning (CL) as a pre-training strategy for HSI classification. Specifically, a supervised contrastive learning (SCL) framework, which pre-trains a feature encoder using an arbitrary number of positive and negative samples in a pair-wise optimization perspective, is proposed. Additionally, three techniques for better generalization in the case of limited training samples are explored in the proposed SCL framework. First, a spatial–spectral HSI data augmentation method, which is composed of multiscale and 3D random occlusion, is designed to generate diverse views for each HSI sample. Second, the features of the augmented views are stored in a queue during training, which enriches the positives and negatives in a mini-batch and thus leads to better convergence. Third, a multi-level similarity regularization method (MSR) combined with SCL (SCL–MSR) is proposed to regularize the similarities of the data pairs. After pre-training, a fully connected layer is combined with the pre-trained encoder to form a new network, which is then fine-tuned for final classification. The proposed methods (SCL and SCL–MSR) are evaluated on four widely used hyperspectral datasets: Indian Pines, Pavia University, Houston, and Chikusei. The experiment results show that the proposed SCL-based methods provide competitive classification accuracy compared to the state-of-the-art methods. Full article
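The supervised contrastive objective that the framework pre-trains with treats every same-label sample as a positive for each anchor, rather than only an augmented view. A small numpy sketch of the loss (feature dimension and temperature are illustrative; the paper's feature queue and augmentations are omitted):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized features z: (N, D)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                          # temperature-scaled similarities
    N = len(labels)
    loss, count = 0.0, 0
    for i in range(N):
        others = [a for a in range(N) if a != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        pos = [p for p in others if labels[p] == labels[i]]
        if not pos:                              # anchors without positives are skipped
            continue
        loss += -np.mean([sim[i, p] - log_denom for p in pos])
        count += 1
    return loss / count

rng = np.random.default_rng(0)
z = rng.standard_normal((6, 8))                  # toy encoder outputs
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(supcon_loss(z, labels), 3))
```

The loss is small when same-label features cluster tightly and different-label features are far apart, which is exactly the geometry a classifier head benefits from after fine-tuning.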

24 pages, 6375 KiB  
Article
An Enhanced Spectral Fusion 3D CNN Model for Hyperspectral Image Classification
by Junbo Zhou, Shan Zeng, Zuyin Xiao, Jinbo Zhou, Hao Li and Zhen Kang
Remote Sens. 2022, 14(21), 5334; https://doi.org/10.3390/rs14215334 - 25 Oct 2022
Cited by 5 | Viewed by 1967
Abstract
With the continuous development of hyperspectral image technology and deep learning methods in recent years, an increasing number of hyperspectral image classification models have been proposed. However, due to the numerous spectral dimensions of hyperspectral images, most classification models suffer from issues such as breaking spectral continuity and poor learning of spectral information. In this paper, we propose a new classification model called the enhanced spectral fusion network (ESFNet), which contains two parts: an optimized multi-scale fused spectral attention module (FsSE) and a 3D convolutional neural network (3D CNN) based on the fusion of different spectral strides (SSFCNN). Specifically, after sampling the hyperspectral images, our model first implements the weighting of the spectral information through the FsSE module to obtain spectral data with a higher degree of information richness. Then, the weighted spectral data are fed into the SSFCNN to realize the effective learning of spectral features. The new model can maximize the retention of spectral continuity and enhance the spectral information while being able to better utilize the enhanced information to improve the model’s ability to learn hyperspectral image features, thus improving the classification accuracy of the model. Experiment results on the Indian Pines and Pavia University datasets demonstrated that our method outperforms other relevant baselines in terms of classification accuracy and generalization performance. Full article

24 pages, 9275 KiB  
Article
Nonlinear Unmixing via Deep Autoencoder Networks for Generalized Bilinear Model
by Jinhua Zhang, Xiaohua Zhang, Hongyun Meng, Caihao Sun, Li Wang and Xianghai Cao
Remote Sens. 2022, 14(20), 5167; https://doi.org/10.3390/rs14205167 - 15 Oct 2022
Cited by 5 | Viewed by 1720
Abstract
Hyperspectral unmixing decomposes the observed mixed spectra into a collection of constituent pure material signatures and the associated fractional abundances. Because of the universal modeling ability of neural networks, deep learning (DL) techniques are gaining prominence in solving hyperspectral analysis tasks. The autoencoder (AE) network has been extensively investigated in linear blind source unmixing. However, the linear mixing model (LMM) may fail to provide good unmixing performance when the nonlinear mixing effects are nonnegligible in complex scenarios. Considering the limitations of LMM, we propose an unsupervised nonlinear spectral unmixing method, based on autoencoder architecture. Firstly, a deep neural network is employed as the encoder to extract the low-dimension feature of the mixed pixel. Then, the generalized bilinear model (GBM) is used to design the decoder, which has a linear mixing part and a nonlinear mixing one. The coefficient of the bilinear mixing part can be adjusted by a set of learnable parameters, which makes the method perform well on both nonlinear and linear data. Finally, some regular terms are imposed on the loss function and an alternating update strategy is utilized to train the network. Experimental results on synthetic and real datasets verify the effectiveness of the proposed model and show very competitive performance compared with several existing algorithms. Full article
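The generalized bilinear model used for the decoder augments the linear mix with pairwise endmember interaction terms, each weighted by a coefficient the network can learn. A numpy sketch of the forward mixing (endmembers and abundances are synthetic; setting all coefficients to zero recovers the plain LMM):

```python
import numpy as np

def gbm_mix(E, a, gamma):
    """Generalized bilinear model: linear mix plus pairwise bilinear terms.

    E: (bands, p) endmember signatures; a: (p,) abundances;
    gamma[i, j] in [0, 1] scales the interaction between endmembers i and j.
    """
    y = E @ a
    p = E.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            y += gamma[i, j] * a[i] * a[j] * (E[:, i] * E[:, j])
    return y

rng = np.random.default_rng(0)
E = rng.random((20, 3))
a = np.array([0.5, 0.3, 0.2])
gamma = np.full((3, 3), 0.5)

y_lin = gbm_mix(E, a, np.zeros((3, 3)))   # gamma = 0 reduces the GBM to the LMM
y_gbm = gbm_mix(E, a, gamma)
print(np.allclose(y_lin, E @ a))          # True
```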

15 pages, 2442 KiB  
Article
An Application of Hyperspectral Image Clustering Based on Texture-Aware Superpixel Technique in Deep Sea
by Panjian Ye, Chenhua Han, Qizhong Zhang, Farong Gao, Zhangyi Yang and Guanghai Wu
Remote Sens. 2022, 14(19), 5047; https://doi.org/10.3390/rs14195047 - 10 Oct 2022
Cited by 2 | Viewed by 1353
Abstract
This paper aims to study the application of hyperspectral technology in the classification of deep-sea manganese nodules. Considering the spectral–spatial variation of hyperspectral images, the difficulty of label acquisition, and the inability to guarantee stable illumination in deep-sea environments, this paper proposes a local binary pattern manifold superpixel-based fuzzy clustering method (LMSLIC-FCM). Firstly, we introduce a uniform local binary pattern (ULBP) to design a superpixel algorithm (LMSLIC) that is insensitive to illumination and has texture perception. Secondly, the weighted feature and the mean feature are fused as the representative features of superpixels. Finally, it is fused with the fuzzy clustering method (FCM) to obtain a superpixel-based clustering algorithm, LMSLIC-FCM. To verify the feasibility of LMSLIC-FCM on deep-sea manganese nodule data, experiments were conducted on three different types of manganese nodule data. The average identification rate of LMSLIC-FCM reached 83.8%, and the average true positive rate reached 93.3%, which was preferable to the previous algorithms. Therefore, LMSLIC-FCM is effective in the classification of manganese nodules. Full article
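The uniform local binary pattern underlying LMSLIC thresholds each pixel's 8 neighbours against the centre and keeps only patterns with at most two circular 0/1 transitions; because the comparison is relative to the centre, the code is largely insensitive to illumination. A minimal sketch for a single pixel (the 3×3 image is synthetic):

```python
import numpy as np

def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c)."""
    center = img[r, c]
    # clockwise neighbours starting from the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [int(img[r + dr, c + dc] >= center) for dr, dc in offs]
    return sum(b << k for k, b in enumerate(bits)), bits

def is_uniform(bits):
    """A pattern is 'uniform' if it has at most two circular 0/1 transitions."""
    trans = sum(bits[k] != bits[(k + 1) % len(bits)] for k in range(len(bits)))
    return trans <= 2

img = np.array([[9, 9, 9],
                [1, 5, 1],
                [1, 1, 1]])
code, bits = lbp_code(img, 1, 1)
print(bits, is_uniform(bits))   # [1, 1, 1, 0, 0, 0, 0, 0] True
```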

20 pages, 7679 KiB  
Article
A Local and Nonlocal Feature Interaction Network for Pansharpening
by Junru Yin, Jiantao Qu, Le Sun, Wei Huang and Qiqiang Chen
Remote Sens. 2022, 14(15), 3743; https://doi.org/10.3390/rs14153743 - 04 Aug 2022
Cited by 3 | Viewed by 1367
Abstract
Pansharpening based on deep learning (DL) has shown great advantages. Most convolutional neural network (CNN)-based methods focus on obtaining local features from multispectral (MS) and panchromatic (PAN) images, but ignore the nonlocal dependence on images. Therefore, Transformer-based methods are introduced to obtain long-range information on images. However, the representational capabilities of features extracted by CNN or Transformer alone are weak. To solve this problem, a local and nonlocal feature interaction network (LNFIN) is proposed in this paper for pansharpening. It comprises Transformer and CNN branches. Furthermore, a feature interaction module (FIM) is proposed to fuse different features and return to the two branches to enhance the representational capability of features. Specifically, a CNN branch consisting of multiscale dense modules (MDMs) is proposed for acquiring local features of the image, and a Transformer branch consisting of pansharpening Transformer modules (PTMs) is introduced for acquiring nonlocal features of the image. In addition, inspired by the PTM, a shift pansharpening Transformer module (SPTM) is proposed for the learning of texture features to further enhance the spatial representation of features. The LNFIN outperforms the state-of-the-art method experimentally on three datasets. Full article

22 pages, 5157 KiB  
Article
Total Carbon Content Assessed by UAS Near-Infrared Imagery as a New Fire Severity Metric
by Anna Brook, Seham Hamzi, Dar Roberts, Charles Ichoku, Nurit Shtober-Zisu and Lea Wittenberg
Remote Sens. 2022, 14(15), 3632; https://doi.org/10.3390/rs14153632 - 29 Jul 2022
Cited by 2 | Viewed by 2154
Abstract
The ash produced by forest fires is a complex mixture of organic and inorganic particles with many properties. Amounts of ash and char are used to roughly evaluate the impacts of a fire on nutrient cycling and ecosystem recovery. Numerous studies have suggested that fire severity can be assessed by measuring changes in ash characteristics. Traditional methods to determine fire severity are based on in situ observations, and visual approximation of changes in the forest floor and soil which are both laborious and subjective. These measures primarily reflect the level of consumption of organic layers, the deposition of ash, particularly its depth and color, and fire-induced changes in the soil. Recent studies suggested adding remote sensing techniques to the field observations and using machine learning and spectral indices to assess the effects of fires on ecosystems. While index thresholding can be easily implemented, its effectiveness over large areas is limited to pattern coverage of forest type and fire regimes. Machine learning algorithms, on the other hand, allow multivariate classifications, but learning is complex and time-consuming when analyzing space-time series. Therefore, there is currently no consensus regarding a quantitative index of fire severity. Considering that wildfires play a major role in controlling forest carbon storage and cycling in fire-suppressed forests, this study examines the use of low-cost multispectral imagery across visible and near-infrared regions collected by unmanned aerial systems to determine fire severity according to the color and chemical properties of vegetation ash. The use of multispectral imagery data might reduce the lack of precision that is part of manual color matching and produce a vast and accurate spatio-temporal severity map. The suggested severity map is based on spectral information used to evaluate chemical/mineralogical changes by deep learning algorithms. 
These methods quantify total carbon content and assess the corresponding fire intensity that is required to form a particular residue. By designing three learning algorithms (PLS-DA, ANN, and 1-D CNN) for two datasets (RGB images and Munsell color versus Unmanned Aerial System (UAS)-based multispectral imagery) the multispectral prediction results were excellent. Therefore, deep network-based near-infrared remote sensing technology has the potential to become an alternative reliable method to assess fire severity. Full article

Review

Jump to: Research

30 pages, 9820 KiB  
Review
A Review of Deep-Learning Methods for Change Detection in Multispectral Remote Sensing Images
by Eleonora Jonasova Parelius
Remote Sens. 2023, 15(8), 2092; https://doi.org/10.3390/rs15082092 - 16 Apr 2023
Cited by 7 | Viewed by 5810
Abstract
Remote sensing is a tool of interest for a large variety of applications. It is becoming increasingly more useful with the growing amount of available remote sensing data. However, the large amount of data also leads to a need for improved automated analysis. Deep learning is a natural candidate for solving this need. Change detection in remote sensing is a rapidly evolving area of interest that is relevant for a number of fields. Recent years have seen a large number of publications and progress, even though the challenge is far from solved. This review focuses on deep learning applied to the task of change detection in multispectral remote-sensing images. It provides an overview of open datasets designed for change detection as well as a discussion of selected models developed for this task—including supervised, semi-supervised and unsupervised. Furthermore, the challenges and trends in the field are reviewed, and possible future developments are considered. Full article
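The deep models surveyed in such reviews are typically benchmarked against classical baselines such as thresholded image differencing between the two acquisition dates. A toy numpy version of that baseline (images and threshold are illustrative):

```python
import numpy as np

def change_map(t1, t2, thresh=0.2):
    """Per-pixel change mask from two co-registered multispectral images (H, W, B)."""
    diff = np.linalg.norm(t2.astype(np.float64) - t1.astype(np.float64), axis=-1)
    return diff > thresh                 # boolean (H, W) change mask

t1 = np.zeros((4, 4, 3))                 # image at time 1
t2 = t1.copy()
t2[1, 1] = 1.0                           # one changed pixel at time 2
mask = change_map(t1, t2)
print(int(mask.sum()))                   # 1
```

The deep architectures discussed in the review replace the raw spectral difference with learned features, which is what makes them robust to illumination and seasonal variation that fool this simple baseline.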
