
Advanced Machine Learning and Deep Learning Approaches for Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (20 November 2022) | Viewed by 50951

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Special Issue Information

Dear Colleagues,

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact. Artificial intelligence approaches such as machine learning and deep learning have shown great potential for overcoming the challenges of remote sensing signal, image, and video processing. These approaches demand substantial computing power and therefore typically rely on GPUs. Thanks to sustained research efforts, recent advances in remote sensing have enabled high-resolution monitoring of Earth on a global scale, producing a massive amount of Earth-observation data. We trust that artificial intelligence, machine learning, and deep learning approaches will provide promising tools for overcoming many challenges in remote sensing in terms of accuracy and reliability at high speed.

This Special Issue is the third edition of “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”. It aims to report the latest advances and trends concerning advanced machine learning and deep learning techniques for remote sensing data-processing problems. Papers of both a theoretical and an applied nature, as well as contributions on new advanced machine learning and data science techniques for the remote sensing research community, are welcome.

Both original research articles and review articles are welcome for submission.

Dr. Gwanggil Jeon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts are available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • remote sensing
  • signal/image processing
  • deep learning
  • artificial intelligence
  • time series processing

Published Papers (18 papers)


Editorial

Jump to: Research

6 pages, 185 KiB  
Editorial
Advanced Machine Learning and Deep Learning Approaches for Remote Sensing
by Gwanggil Jeon
Remote Sens. 2023, 15(11), 2876; https://doi.org/10.3390/rs15112876 - 1 Jun 2023
Viewed by 1499
Abstract
Unlike field observation or field sensing, remote sensing is the process of obtaining information about an object or phenomenon without making physical contact [...] Full article

Research

Jump to: Editorial

17 pages, 5456 KiB  
Article
Cloud Removal from Satellite Images Using a Deep Learning Model with the Cloud-Matting Method
by Deying Ma, Renzhe Wu, Dongsheng Xiao and Baikai Sui
Remote Sens. 2023, 15(4), 904; https://doi.org/10.3390/rs15040904 - 6 Feb 2023
Cited by 10 | Viewed by 4348
Abstract
Clouds seriously limit the application of optical remote sensing images. In this paper, we remove clouds from satellite images using a novel method that treats ground surface reflections and cloud-top reflections as a linear mixture of image elements, from the perspective of image superposition. We use a two-step convolutional neural network to extract the transparency information of clouds and then recover the ground surface information of thin-cloud regions. Given the poor balance of the generated samples, this paper also improves the binary Tversky loss function and applies it to multi-class tasks. The model was validated on a simulated dataset and the ALCD dataset. The results show that the model outperformed the control-group experiments in both cloud detection and removal. The model locates clouds in images more accurately through cloud matting, which is built on cloud detection. In addition, the model successfully recovers the surface information of thin-cloud regions when thick and thin clouds coexist, without damaging the original image’s information. Full article
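The abstract does not spell out the improved multi-class Tversky loss, so the following is only a generic sketch of a per-class Tversky index over integer label maps; the `alpha`/`beta` weights and the interface are assumptions, not the authors' implementation:

```python
import numpy as np

def tversky_index(pred, target, num_classes, alpha=0.7, beta=0.3, eps=1e-7):
    """Per-class Tversky index averaged over classes.

    pred, target: integer label maps of the same shape.
    alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 recovers the Dice coefficient.
    """
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        tp = np.sum(p & t)
        fp = np.sum(p & ~t)
        fn = np.sum(~p & t)
        scores.append((tp + eps) / (tp + alpha * fp + beta * fn + eps))
    return float(np.mean(scores))

def tversky_loss(pred, target, num_classes, alpha=0.7, beta=0.3):
    """Loss = 1 - mean Tversky index; smaller is better."""
    return 1.0 - tversky_index(pred, target, num_classes, alpha, beta)
```

Weighting false negatives and false positives asymmetrically is what makes the Tversky loss attractive for the imbalanced samples the abstract mentions.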

16 pages, 7312 KiB  
Article
RCCT-ASPPNet: Dual-Encoder Remote Image Segmentation Based on Transformer and ASPP
by Yazhou Li, Zhiyou Cheng, Chuanjian Wang, Jinling Zhao and Linsheng Huang
Remote Sens. 2023, 15(2), 379; https://doi.org/10.3390/rs15020379 - 7 Jan 2023
Cited by 15 | Viewed by 2749
Abstract
Remote image semantic segmentation is one of the core research topics in computer vision and has a wide range of practical applications. Most remote image semantic segmentation methods are based on CNNs; recently, Transformers have provided a way to model long-distance dependencies in images. In this paper, we propose RCCT-ASPPNet, which includes the dual-encoder structure of Residual Multiscale Channel Cross-Fusion with Transformer (RCCT) and Atrous Spatial Pyramid Pooling (ASPP). RCCT uses a Transformer to cross-fuse global multiscale semantic information, with a residual structure connecting the inputs and outputs. The CNN-based ASPP extracts high-level contextual semantic information from different perspectives and uses a Convolutional Block Attention Module (CBAM) to extract spatial and channel information, further improving the model’s segmentation ability. The experimental results show that the mIoU of our method is 94.14% and 61.30% on the Farmland and AeroScapes datasets, respectively, and that the mPA is 97.12% and 84.36%, respectively, both outperforming DeepLabV3+ and UCTransNet. Full article
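For readers unfamiliar with the reported metrics, mIoU and mPA can both be derived from a class confusion matrix; this is a standard-definition sketch, not code from the paper:

```python
import numpy as np

def confusion_matrix(pred, target, num_classes):
    """Row = ground-truth class, column = predicted class."""
    idx = target.ravel() * num_classes + pred.ravel()
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_mpa(cm, eps=1e-7):
    """Mean intersection-over-union and mean per-class pixel accuracy."""
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp   # ground-truth pixels missed
    fp = cm.sum(axis=0) - tp   # pixels wrongly assigned to the class
    iou = tp / (tp + fp + fn + eps)
    pa = tp / (tp + fn + eps)  # per-class pixel accuracy (recall)
    return float(iou.mean()), float(pa.mean())
```

mPA averages per-class recall and so ignores false positives, which is why it typically reads higher than mIoU, as in the 97.12% vs. 94.14% figures above.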

20 pages, 3273 KiB  
Article
Azimuth-Aware Discriminative Representation Learning for Semi-Supervised Few-Shot SAR Vehicle Recognition
by Linbin Zhang, Xiangguang Leng, Sijia Feng, Xiaojie Ma, Kefeng Ji, Gangyao Kuang and Li Liu
Remote Sens. 2023, 15(2), 331; https://doi.org/10.3390/rs15020331 - 5 Jan 2023
Cited by 10 | Viewed by 1902
Abstract
Among current methods of synthetic aperture radar (SAR) automatic target recognition (ATR), unlabeled measured data and labeled simulated data are widely used to elevate performance. In view of this, we propose the setting of semi-supervised few-shot SAR vehicle recognition, which uses these two forms of data to cope with the scarcity of labeled measured data; this is pioneering work in the field. Given the sensitivity of SAR vehicle recognition to target pose, especially when only a few labeled samples are available, we design two azimuth-aware discriminative representation (AADR) losses that suppress intra-class variations between samples with large azimuth-angle differences while enlarging inter-class differences between samples with the same azimuth angle in the feature-embedding space, via cosine similarity. Unlabeled measured data from the MSTAR dataset are assigned pseudo-labels from categories in the SARSIM and SAMPLE datasets, and both forms of data are taken into account in the proposed loss. The few labeled samples in the experimental settings are randomly selected from the training set, and both the phase and amplitude data of SAR targets are considered. The proposed method achieves 71.05%, 86.09%, and 66.63% accuracy under the 4-way 1-shot setting in EOC1 (Extended Operating Condition), EOC2/C, and EOC2/V, respectively, outperforming other few-shot learning (FSL) and semi-supervised few-shot learning (SSFSL) methods. Full article
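The exact AADR formulation is not given in the abstract; purely as an illustration of the stated idea (pull same-class embeddings together across large azimuth gaps, push different-class embeddings apart at similar azimuths, via cosine similarity), a toy contrastive loss might look like this. The `margin` and `az_gap` thresholds are hypothetical:

```python
import numpy as np

def cosine_sim(a, b, eps=1e-9):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def aadr_style_loss(emb, labels, azimuths, margin=0.2, az_gap=90.0):
    """Toy azimuth-aware contrastive loss (illustrative, not the paper's formula).

    Same-class pairs with widely differing azimuth angles are pulled together;
    different-class pairs observed at similar azimuths are pushed apart.
    """
    loss, pairs = 0.0, 0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            s = cosine_sim(emb[i], emb[j])
            d_az = abs(azimuths[i] - azimuths[j])
            if labels[i] == labels[j] and d_az >= az_gap:
                loss += 1.0 - s               # suppress intra-class variation
                pairs += 1
            elif labels[i] != labels[j] and d_az < az_gap:
                loss += max(0.0, s - margin)  # enlarge inter-class difference
                pairs += 1
    return loss / max(pairs, 1)
```

In practice such a term would be combined with a classification loss over both labeled and pseudo-labeled samples.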

17 pages, 12419 KiB  
Article
FlexibleNet: A New Lightweight Convolutional Neural Network Model for Estimating Carbon Sequestration Qualitatively Using Remote Sensing
by Mohamad M. Awad
Remote Sens. 2023, 15(1), 272; https://doi.org/10.3390/rs15010272 - 2 Jan 2023
Cited by 7 | Viewed by 2736
Abstract
Many heavy and lightweight convolutional neural networks (CNNs) require large datasets and parameter tuning, and they consume considerable time and computing resources. A new lightweight model called FlexibleNet was created to overcome these obstacles. The new model is based on CNN scaling (width, depth, and resolution). Unlike the conventional practice, which scales these factors arbitrarily, FlexibleNet uniformly scales the network width, depth, and resolution with a set of fixed scaling coefficients. The new model was tested by qualitatively estimating sequestered carbon in aboveground forest biomass from Sentinel-2 images. We also created three training datasets of different sizes, each consisting of six qualitative categories (no carbon, very low, low, medium, high, and very high). The results showed that FlexibleNet was better than or comparable to the other lightweight and heavy CNN models in terms of the number of parameters and time requirements, while achieving the highest accuracy among these CNN models. Finally, FlexibleNet showed robustness and low parameter-tuning requirements when trained on a small dataset, compared to the other models. Full article
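Uniform scaling of width, depth, and resolution with fixed coefficients mirrors EfficientNet-style compound scaling; a minimal sketch of that idea follows, with hypothetical coefficients (the paper's actual values are not given in the abstract):

```python
def compound_scale(base_depth, base_width, base_resolution, phi,
                   alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet-style compound scaling with fixed coefficients.

    depth ~ alpha**phi, width ~ beta**phi, resolution ~ gamma**phi,
    so a single exponent phi grows all three dimensions together.
    """
    depth = int(round(base_depth * alpha ** phi))
    width = int(round(base_width * beta ** phi))
    resolution = int(round(base_resolution * gamma ** phi))
    return depth, width, resolution
```

Tying the three dimensions to one exponent is what removes the arbitrary per-factor tuning the abstract criticizes.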

20 pages, 56911 KiB  
Article
Capacity Estimation of Solar Farms Using Deep Learning on High-Resolution Satellite Imagery
by Rashmi Ravishankar, Elaf AlMahmoud, Abdulelah Habib and Olivier L. de Weck
Remote Sens. 2023, 15(1), 210; https://doi.org/10.3390/rs15010210 - 30 Dec 2022
Cited by 6 | Viewed by 4188
Abstract
Global solar photovoltaic capacity has consistently doubled every 18 months over the last two decades, going from 0.3 GW in 2000 to 643 GW in 2019, and is forecast to reach 4240 GW by 2040. However, these numbers are uncertain, and virtually all reporting on deployments lacks a unified source of either information or validation. In this paper, we propose, optimize, and validate a deep learning framework to detect and map solar farms using a state-of-the-art semantic segmentation convolutional neural network applied to satellite imagery. As a final step in the pipeline, we propose a model to estimate the energy generation capacity of the detected solar energy facilities. Objectively, the deep learning model achieved highly competitive performance indicators, including a mean accuracy of 96.87% and a Jaccard Index (intersection over union of classified pixels) score of 95.5%. Subjectively, it was found to detect the spaces between panels, producing a segmentation output at the sub-farm level that was better than human labeling. Finally, the detected areas and predicted generation capacities were validated against publicly available data to within an average error of 4.5%. Deep learning applied specifically to the detection and mapping of solar farms is an active area of research, and this deep learning capacity evaluation pipeline is one of the first of its kind. We also share an original dataset of overhead solar farm satellite imagery comprising 23,000 images (256 × 256 pixels each) and the corresponding labels upon which the machine learning model was trained. Full article

24 pages, 2532 KiB  
Article
Deep-Learning-Based Feature Extraction Approach for Significant Wave Height Prediction in SAR Mode Altimeter Data
by Ghada Atteia, Michael J. Collins, Abeer D. Algarni and Nagwan Abdel Samee
Remote Sens. 2022, 14(21), 5569; https://doi.org/10.3390/rs14215569 - 4 Nov 2022
Cited by 6 | Viewed by 2473
Abstract
Predicting sea wave parameters such as significant wave height (SWH) has recently been identified as a critical requirement for maritime security and economy. Earth observation satellite missions have resulted in a massive rise in marine data volume and dimensionality. Deep learning technologies have proven their capabilities to process large amounts of data, draw useful insights, and assist in environmental decision making. In this study, a new deep-learning-based hybrid feature selection approach is proposed for SWH prediction using satellite Synthetic Aperture Radar (SAR) mode altimeter data. The introduced approach integrates the power of autoencoder deep neural networks in mapping input features into representative latent-space features with the feature selection power of the principal component analysis (PCA) algorithm to create significant features from altimeter observations. Several hybrid feature sets were generated using the proposed approach and utilized for modeling SWH with Gaussian Process Regression (GPR) and Neural Network Regression (NNR). SAR mode altimeter data from the Sentinel-3A mission, calibrated by in situ buoy data, were used for training and evaluating the SWH models. The significance of the autoencoder-based feature sets in improving the prediction performance of SWH models is investigated against original, traditionally selected, and hybrid features. The autoencoder–PCA hybrid feature set generated by the proposed approach recorded the lowest average RMSE value of 0.11069 for the GPR models, outperforming state-of-the-art results. The findings of this study reveal the superiority of the autoencoder deep learning network in generating latent features that improve the prediction performance of SWH models over traditional feature extraction methods. Full article
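The PCA half of the hybrid pipeline reduces altimeter features to their leading principal components; a minimal SVD-based sketch of that step is below (the autoencoder half and how the two feature sets are combined are not specified in the abstract, so only the PCA projection is shown):

```python
import numpy as np

def pca_features(X, k):
    """Project a feature matrix X (n_samples x n_features) onto its
    top-k principal components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # scores in the k-dimensional PCA subspace
```

In a hybrid scheme like the one described, these PCA scores would be concatenated with the autoencoder's latent features before regression.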

12 pages, 3093 KiB  
Article
Deep-Separation Guided Progressive Reconstruction Network for Semantic Segmentation of Remote Sensing Images
by Jiabao Ma, Wujie Zhou, Xiaohong Qian and Lu Yu
Remote Sens. 2022, 14(21), 5510; https://doi.org/10.3390/rs14215510 - 1 Nov 2022
Cited by 4 | Viewed by 1733
Abstract
Advances in deep learning have improved the semantic segmentation of remote sensing images (RSIs) in recent years. However, existing RSI segmentation methods have two inherent problems: (1) detecting objects of various scales in complex RSI scenes is challenging, and (2) feature reconstruction for accurate segmentation is difficult. To solve these problems, we propose a deep-separation-guided progressive reconstruction network for accurate RSI segmentation. First, we design a decoder comprising progressive reconstruction blocks that capture detailed features at various resolutions, using multi-scale features obtained from various receptive fields to preserve accuracy during reconstruction. Second, we propose a deep separation module that distinguishes classes based on semantic features, using deep features to detect objects of different scales. Moreover, adjacent intermediate features are complemented during decoding to improve segmentation performance. Extensive experimental results on two optical RSI datasets show that the proposed network outperforms 11 state-of-the-art methods. Full article

21 pages, 3827 KiB  
Article
A Multi-Dimensional Deep-Learning-Based Evaporation Duct Height Prediction Model Derived from MAGIC Data
by Cheng Yang, Jian Wang and Yafei Shi
Remote Sens. 2022, 14(21), 5484; https://doi.org/10.3390/rs14215484 - 31 Oct 2022
Cited by 3 | Viewed by 1606
Abstract
The evaporation duct height (EDH) reflects the main characteristics of the near-surface meteorological environment and is essential for designing communication systems under this propagation mechanism. This study proposes an EDH prediction network based on a multi-layer perceptron (MLP). Further, we construct a multi-dimensional EDH prediction model (multilayer-MLP-EDH) for the first time by adding spatial and temporal “extra data” derived from the meteorological measurements. The experimental results show that: (1) compared with the naval-postgraduate-school (NPS) model, the root-mean-square error (RMSE) of the meteorological-MLP-EDH model is reduced to 2.15 m, an improvement of 54.00%; (2) spatial and temporal parameters further reduce the RMSE to 1.54 m, an improvement of 66.96%; (3) by attaching meteorological parameters at extra heights, the multilayer-MLP-EDH model matches measurements well at both large and small scales, and the error is further reduced to 1.05 m, a 77.51% improvement over the NPS model. The proposed model significantly improves the prediction accuracy of the EDH and has great potential to improve the communication quality, reliability, and efficiency of systems operating in evaporation ducts. Full article
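The RMSE figures and percentage improvements quoted above follow the standard definitions; as a quick reference (not the paper's code), they can be computed as:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted EDH values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pct_improvement(rmse_baseline, rmse_model):
    """Relative RMSE reduction versus a baseline (e.g. the NPS model), in percent."""
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline
```

For example, a drop from a baseline RMSE of 10.0 m to 4.0 m is a 60% improvement under this definition.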

34 pages, 27544 KiB  
Article
A Review of Image Super-Resolution Approaches Based on Deep Learning and Applications in Remote Sensing
by Xuan Wang, Jinglei Yi, Jian Guo, Yongchao Song, Jun Lyu, Jindong Xu, Weiqing Yan, Jindong Zhao, Qing Cai and Haigen Min
Remote Sens. 2022, 14(21), 5423; https://doi.org/10.3390/rs14215423 - 28 Oct 2022
Cited by 36 | Viewed by 8512
Abstract
At present, with the advance of satellite image processing technology, remote sensing images are becoming more widely used in real scenes. However, due to the limitations of current remote sensing imaging technology and the influence of the external environment, the resolution of remote sensing images often struggles to meet application requirements. In order to obtain high-resolution remote sensing images, image super-resolution methods are gradually being applied to the recovery and reconstruction of remote sensing images. The use of image super-resolution methods can overcome the current limitations of remote sensing image acquisition systems and acquisition environments, solving the problems of poor-quality remote sensing images, blurred regions of interest, and the requirement for high-efficiency image reconstruction, a research topic that is of significant relevance to image processing. In recent years, there has been tremendous progress made in image super-resolution methods, driven by the continuous development of deep learning algorithms. In this paper, we provide a comprehensive overview and analysis of deep-learning-based image super-resolution methods. Specifically, we first introduce the research background and details of image super-resolution techniques. Second, we present some important works on remote sensing image super-resolution, such as training and testing datasets, image quality and model performance evaluation methods, model design principles, related applications, etc. Finally, we point out some existing problems and future directions in the field of remote sensing image super-resolution. Full article

13 pages, 3476 KiB  
Communication
Real-Time Vehicle Sound Detection System Based on Depthwise Separable Convolution Neural Network and Spectrogram Augmentation
by Chaoyi Wang, Yaozhe Song, Haolong Liu, Huawei Liu, Jianpo Liu, Baoqing Li and Xiaobing Yuan
Remote Sens. 2022, 14(19), 4848; https://doi.org/10.3390/rs14194848 - 28 Sep 2022
Cited by 6 | Viewed by 1799
Abstract
This paper proposes a lightweight model combined with data augmentation for vehicle detection in an intelligent sensor system. Vehicle detection can be considered a binary classification problem: vehicle or non-vehicle. Deep neural networks have shown high accuracy in audio classification, and convolutional neural networks are widely used for audio feature extraction and classification. However, the performance of deep neural networks depends heavily on the availability of large quantities of training data. Recordings of targets such as tracked vehicles are limited, so data augmentation techniques are applied to improve the overall detection accuracy. In our case, spectrogram augmentation is applied to the mel spectrogram before extracting the Mel-scale Frequency Cepstral Coefficient (MFCC) features to improve the robustness of the system. Depthwise separable convolution is then applied to the CNN for model compression, and the model is migrated to the hardware platform of the intelligent sensor system. The proposed approach is evaluated on a dataset recorded in the field using intelligent sensor systems with microphones. The final frame-level accuracy was 94.64% on the test recordings, and the number of parameters was reduced by 34% after compression. Full article
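Spectrogram augmentation of this kind is commonly implemented as SpecAugment-style frequency and time masking; the sketch below illustrates the idea on a mel spectrogram. The mask counts and widths are hypothetical, not the paper's settings:

```python
import numpy as np

def spec_augment(spec, num_freq_masks=1, num_time_masks=1,
                 max_f=4, max_t=4, rng=None):
    """SpecAugment-style masking on a (freq_bins x time_frames) mel
    spectrogram. Randomly chosen frequency bands and time spans are zeroed."""
    rng = np.random.default_rng(rng)
    out = spec.copy()
    n_freq, n_time = out.shape
    for _ in range(num_freq_masks):
        f = int(rng.integers(0, max_f + 1))       # mask width in bins
        f0 = int(rng.integers(0, n_freq - f + 1)) # mask start bin
        out[f0:f0 + f, :] = 0.0
    for _ in range(num_time_masks):
        t = int(rng.integers(0, max_t + 1))
        t0 = int(rng.integers(0, n_time - t + 1))
        out[:, t0:t0 + t] = 0.0
    return out
```

Masking before the MFCC step forces the classifier to tolerate missing time-frequency content, which is the robustness gain the abstract describes.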

21 pages, 9620 KiB  
Article
Blind Restoration of Atmospheric Turbulence-Degraded Images Based on Curriculum Learning
by Jie Shu, Chunzhi Xie and Zhisheng Gao
Remote Sens. 2022, 14(19), 4797; https://doi.org/10.3390/rs14194797 - 26 Sep 2022
Cited by 5 | Viewed by 1808
Abstract
Atmospheric turbulence-degraded images in typical practical application scenarios are always disturbed by severe additive noise. Severe additive noise corrupts the prior assumptions of most baseline deconvolution methods. Existing methods either ignore the additive noise term during optimization or perform denoising and deblurring completely independently. However, their performances are not high because they do not conform to the prior that multiple degradation factors are tightly coupled. This paper proposes a Noise Suppression-based Restoration Network (NSRN) for turbulence-degraded images, in which the noise suppression module is designed to learn low-rank subspaces from turbulence-degraded images, the attention-based asymmetric U-NET module is designed for blurred-image deconvolution, and the Fine Deep Back-Projection (FDBP) module is used for multi-level feature fusion to reconstruct a sharp image. Furthermore, an improved curriculum learning strategy is proposed, which trains the network gradually to achieve superior performance through a local-to-global, easy-to-difficult learning method. Based on NSRN, we achieve state-of-the-art performance with PSNR of 30.1 dB and SSIM of 0.9 on the simulated dataset and better visual results on the real images. Full article
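The PSNR figure of 30.1 dB quoted above follows the standard definition; as a reference sketch (not the paper's evaluation code):

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

Each additional 10 dB corresponds to a tenfold reduction in mean squared error, so 30.1 dB on [0, 1] images implies an MSE near 1e-3.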

21 pages, 7901 KiB  
Article
Prediction of Sea Surface Temperature by Combining Interdimensional and Self-Attention with Neural Networks
by Xing Guo, Jianghai He, Biao Wang and Jiaji Wu
Remote Sens. 2022, 14(19), 4737; https://doi.org/10.3390/rs14194737 - 22 Sep 2022
Cited by 6 | Viewed by 2006
Abstract
Sea surface temperature (SST) is one of the most important and widely used physical parameters in oceanography and meteorology. To obtain SST, in addition to direct measurement, remote sensing, and numerical models, a variety of data-driven models have been developed as a wealth of SST data has accumulated. As oceans are comprehensive and complex dynamic systems, the distribution and variation of SST are affected by various factors. To overcome this challenge and improve prediction accuracy, a multi-variable long short-term memory (LSTM) model is proposed that takes wind speed and sea-level air pressure together with SST as inputs. Furthermore, two attention mechanisms are introduced to optimize the model: an interdimensional attention strategy, similar to a positional encoding matrix, focuses on important historical moments of the multi-dimensional input, while a self-attention strategy smooths the data during training. Forty-three years of monthly mean SST and meteorological data from the fifth-generation ECMWF (European Centre for Medium-Range Weather Forecasts) reanalysis (ERA5) are collected to train and test the model for the sea areas around China. The performance of the model is evaluated in terms of the coefficient of determination, root mean squared error, mean absolute error, and mean absolute percentage error, with ranges of 0.9138–0.991, 0.3928–0.8789, 0.3213–0.6803, and 0.1067–0.2336, respectively. The prediction results indicate that the model is superior to an LSTM-only model and to models taking SST alone as input, confirming that it is promising for oceanographic and meteorological investigation. Full article
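The four evaluation metrics reported above have standard definitions; a compact reference sketch (not the paper's code) that returns them together:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Coefficient of determination (R^2), RMSE, MAE, and MAPE.

    MAPE divides by y_true, so it assumes nonzero targets
    (reasonable for SST values in degrees or kelvin).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true))
    return float(r2), float(rmse), float(mae), float(mape)
```

Reporting all four gives complementary views: R^2 measures explained variance, RMSE penalizes large errors, MAE is robust to outliers, and MAPE is scale-free.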

15 pages, 3892 KiB  
Article
Mode Recognition of Orbital Angular Momentum Based on Attention Pyramid Convolutional Neural Network
by Tan Qu, Zhiming Zhao, Yan Zhang, Jiaji Wu and Zhensen Wu
Remote Sens. 2022, 14(18), 4618; https://doi.org/10.3390/rs14184618 - 15 Sep 2022
Cited by 3 | Viewed by 1544
Abstract
In an effort to address the insufficient accuracy of existing orbital angular momentum (OAM) detection systems for vortex optical communication, an OAM mode detection technology based on an attention pyramid convolutional neural network (AP-CNN) is proposed. By introducing fine-grained image classification, the low-level detailed features of the similar light-intensity distributions of vortex-beam superpositions and plane-wave interferograms are fully utilized. Using ResNet18 as the backbone of AP-CNN, a dual-path structure with an attention pyramid is adopted to detect subtle differences in the light intensity of images. Under different turbulence intensities and transmission distances, the detection accuracy and system bit error rate are numerically analyzed for a basic CNN with three convolutional layers and two fully connected layers, ResNet18, ResNet18 with a specified mapping relationship, and AP-CNN. Compared to ResNet18, AP-CNN achieves up to a 7% improvement in accuracy and a 3% reduction in incorrect mode identification in the confusion matrix of superimposed vortex modes. The accuracy of single OAM mode detection based on AP-CNN is improved by 5.5% compared with ResNet18 at a transmission distance of 2 km in strong atmospheric turbulence. The proposed OAM detection scheme may find important applications in optical communications and remote sensing. Full article

25 pages, 35739 KiB  
Article
An Empirical Study on Retinex Methods for Low-Light Image Enhancement
by Muhammad Tahir Rasheed, Guiyu Guo, Daming Shi, Hufsa Khan and Xiaochun Cheng
Remote Sens. 2022, 14(18), 4608; https://doi.org/10.3390/rs14184608 - 15 Sep 2022
Cited by 11 | Viewed by 3203
Abstract
A key part of interpreting, visualizing, and monitoring the surface conditions of remote-sensing images is enhancing the quality of low-light images. It aims to produce higher contrast, noise-suppressed, and better quality images from the low-light version. Recently, Retinex theory-based enhancement methods have gained a lot of attention because of their robustness. In this study, Retinex-based low-light enhancement methods are compared to other state-of-the-art low-light enhancement methods to determine their generalization ability and computational costs. Different commonly used test datasets covering different content and lighting conditions are used to compare the robustness of Retinex-based methods and other low-light enhancement techniques. Different evaluation metrics are used to compare the results, and an average ranking system is suggested to rank the enhancement methods. Full article
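As background on the Retinex family surveyed here: single-scale Retinex estimates the reflectance component as the log of the image minus the log of a smoothed illumination estimate. The sketch below substitutes a simple box filter for the usual Gaussian surround to stay dependency-free; it is an illustration of the principle, not any surveyed method:

```python
import numpy as np

def single_scale_retinex(img, radius=3):
    """Single-scale Retinex on a 2-D grayscale image:
    log(image) - log(smoothed illumination estimate).
    A box filter of half-width `radius` stands in for the Gaussian surround."""
    img = np.asarray(img, dtype=float) + 1.0  # avoid log(0)
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    illum = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            illum[i, j] = pad[i:i + k, j:j + k].mean()  # local illumination
    return np.log(img) - np.log(illum)
```

On a uniformly lit region the illumination estimate equals the image, so the output is zero; structure emerges only where the image departs from its local mean, which is how Retinex boosts local contrast in dark regions.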

22 pages, 6522 KiB  
Article
Towards Robust Semantic Segmentation of Land Covers in Foggy Conditions
by Weipeng Shi, Wenhu Qin and Allshine Chen
Remote Sens. 2022, 14(18), 4551; https://doi.org/10.3390/rs14184551 - 12 Sep 2022
Cited by 5 | Viewed by 1464
Abstract
When conducting land cover classification, foggy conditions are inevitably encountered, degrading performance by a large margin. Robustness may be reduced by a number of factors, such as low-quality aerial images and ineffective fusion of multimodal representations. Hence, it is crucial to establish a reliable framework that can robustly understand remote sensing image scenes. Based on multimodal fusion and attention mechanisms, we leverage HRNet to extract underlying features, followed by the Spectral and Spatial Representation Learning Module to extract spectral-spatial representations. A Multimodal Representation Fusion Module is proposed to bridge the gap between heterogeneous modalities so that they can be fused in a complementary manner. A comprehensive evaluation on the fog-corrupted Potsdam and Vaihingen test sets demonstrates that the proposed method achieves a mean F1-score exceeding 73%, indicating promising robustness compared with state-of-the-art methods.
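For reference, the mean F1-score reported above is computed per class from a confusion matrix and then averaged. A small sketch (the 3x3 matrix is made-up illustrative data, not results from the paper):

```python
import numpy as np

def mean_f1(conf):
    """Mean F1 over classes from a (K, K) confusion matrix (rows = ground truth)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp          # predicted as class k but wrong
    fn = conf.sum(axis=1) - tp          # class k missed
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-9)
    return f1.mean()

conf = np.array([[50,  2,  3],
                 [ 4, 40,  6],
                 [ 1,  5, 44]])
print(round(mean_f1(conf), 3))  # 0.863
```

Averaging per-class F1 (macro averaging) weights rare land-cover classes equally with common ones, which is why it is a stricter robustness measure than overall accuracy.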

23 pages, 11204 KiB  
Article
Enhanced Multi-Stream Remote Sensing Spatiotemporal Fusion Network Based on Transformer and Dilated Convolution
by Weisheng Li, Dongwen Cao and Minghao Xiang
Remote Sens. 2022, 14(18), 4544; https://doi.org/10.3390/rs14184544 - 11 Sep 2022
Cited by 3 | Viewed by 1650
Abstract
Remote sensing images with high temporal and spatial resolution play a crucial role in land surface-change monitoring, vegetation monitoring, and natural disaster mapping. However, existing technical conditions and cost constraints make it very difficult to obtain such images directly. Consequently, spatiotemporal fusion technology for remote sensing images has attracted considerable attention, and deep learning-based fusion methods have been developed in recent years. In this study, to improve the accuracy and robustness of deep learning models and better extract the spatiotemporal information of remote sensing images, the existing multi-stream remote sensing spatiotemporal fusion network MSNet is extended with dilated convolution and an improved transformer encoder into an enhanced version called EMSNet. Dilated convolution is used to extract temporal information while reducing parameters. The transformer encoder is further adapted to image fusion so as to effectively extract spatiotemporal information. A new weighting strategy is used for fusion that substantially improves the model's prediction accuracy, image quality, and fusion effect. The superiority of the proposed approach is confirmed by comparison with six representative spatiotemporal fusion algorithms on three disparate datasets. Compared with MSNet, EMSNet improves SSIM by 15.3% on the CIA dataset, ERGAS by 92.1% on the LGC dataset, and RMSE by 92.9% on the AHB dataset.
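The parameter-saving property of dilated convolution mentioned above comes from spacing the kernel taps apart rather than adding taps. A hypothetical 1-D NumPy sketch (EMSNet operates on 2-D feature maps; this is only the core idea):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=2):
    """Valid 1-D convolution whose taps sit `dilation` samples apart,
    enlarging the receptive field with no extra parameters."""
    k = len(w)
    span = (k - 1) * dilation + 1            # effective receptive field
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])               # difference over a 5-sample span
print(dilated_conv1d(x, w, dilation=2))      # [-4. -4. -4. -4. -4. -4.]
```

A 3-tap kernel at dilation 2 covers a 5-sample span, so stacking a few dilated layers covers long temporal baselines between satellite acquisitions cheaply.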

15 pages, 2877 KiB  
Article
Retrieval of Live Fuel Moisture Content Based on Multi-Source Remote Sensing Data and Ensemble Deep Learning Model
by Jiangjian Xie, Tao Qi, Wanjun Hu, Huaguo Huang, Beibei Chen and Junguo Zhang
Remote Sens. 2022, 14(17), 4378; https://doi.org/10.3390/rs14174378 - 3 Sep 2022
Cited by 10 | Viewed by 2665
Abstract
Live fuel moisture content (LFMC) is an important index for evaluating wildfire risk and fire spread rate. To further improve retrieval accuracy, two ensemble models combining deep learning models are proposed: a stacking ensemble model based on LSTM, TCN, and LSTM-TCN models, and an AdaBoost ensemble model based on the LSTM-TCN model. Measured LFMC data; MODIS, Landsat-8, and Sentinel-1 remote sensing data; and auxiliary data such as canopy height and land cover of forest-fire-prone areas in the western United States were selected for the study, and the retrieval results of different models with different groups of remote sensing data were compared. The results show that multi-source data integrate the advantages of different types of remote sensing data, yielding higher LFMC retrieval accuracy than single-source remote sensing data. The ensemble models better capture the nonlinear relationship between LFMC and remote sensing data, and the stacking ensemble model using all of the MODIS, Landsat-8, and Sentinel-1 data achieved the best LFMC retrieval results, with R2 = 0.85, RMSE = 18.88, and ubRMSE = 17.99. The proposed stacking ensemble model is more suitable for LFMC retrieval than the existing method.
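The stacking idea, in which base models' held-out predictions feed a meta-learner, can be sketched with a linear least-squares meta-learner on synthetic data. This is only an illustration of the ensemble mechanism: the study's base models are LSTM/TCN networks, not the noisy stand-ins used here.

```python
import numpy as np

def fit_stacking(base_preds, y):
    """Fit a linear meta-learner (with intercept) on base-model predictions."""
    X = np.column_stack([np.ones(len(y))] + list(base_preds))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def stack_predict(coef, base_preds):
    X = np.column_stack([np.ones(len(base_preds[0]))] + list(base_preds))
    return X @ coef

rng = np.random.default_rng(0)
y = rng.uniform(50, 150, 100)            # synthetic LFMC targets (%)
p1 = y + rng.normal(0, 10, 100)          # stand-in for one base model
p2 = y + rng.normal(0, 15, 100)          # stand-in for another base model
coef = fit_stacking([p1, p2], y)
rmse = np.sqrt(np.mean((stack_predict(coef, [p1, p2]) - y) ** 2))
print(f"stacked RMSE: {rmse:.2f}")
```

Because the meta-learner can always recover any single base model as a special case, its in-sample error is never worse than the best base model; on held-out data this requires fitting the meta-learner on out-of-fold predictions, as standard stacking does.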
