Innovative Application of AI in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 September 2021) | Viewed by 22426

Special Issue Editors


Guest Editor
Dr. Wai Chi Fang, National Chiao Tung University, Taiwan
Interests: VLSI biomedical microsystems; neural networks and intelligent systems; multimedia signal processing; wireless communication; sensor networks; space-integrated avionic systems

Guest Editor
Prof. Mincong Tang, Beijing Jiaotong University, China
Interests: network security; computer network security

Special Issue Information

Dear Colleagues,

Satellites have orbited the Earth for decades, scanning landscapes and capturing images of an ever-changing planet. Remote sensing is not a new element of the art and science of observing things from afar, but recent innovations in artificial intelligence have made it far more powerful and can contribute substantially to society. We are at a pivotal stage of a radical transformation in where information comes from and how it is analyzed and monetized. Applying artificial intelligence to the classification of global satellite imagery can help in overcoming the planet's greatest challenges. Clear and reliable satellite imagery provides a wide view of the entire planet that can help society predict climate change, prevent conflicts, and stop forest fires, as well as investigate other pressing questions at high resolution. Costs can also be reduced by replacing or optimizing existing monitoring systems with AI-driven remote sensing.

This Special Issue aims to help unlock the potential of satellite data through artificial intelligence in remote sensing. It invites work on models that extract features, detect changes, and predict physical conditions using artificial intelligence.

Dr. Wai Chi Fang
Prof. Dr. Sabah Mohammed
Prof. Mincong Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Application of AI in remote sensing
  • Challenges in using artificial intelligence
  • Future strategies in remote sensing
  • Advantages of AI in remote sensing
  • Impact of remote sensing
  • Satellite data and artificial intelligence
  • Power of AI in remote sensing
  • How AI affects remote sensing
  • Demand for modern satellites
  • Satellites for a changing planet
  • Trends in remote sensing
  • Using AI in remote sensing for safety purposes
  • Latest remote sensing applications
  • Importance of AI in remote sensing
  • Effectiveness of remote sensing

Published Papers (5 papers)


Research


17 pages, 3252 KiB  
Article
Application of Random Forest Algorithm for Merging Multiple Satellite Precipitation Products across South Korea
by Giang V. Nguyen, Xuan-Hien Le, Linh Nguyen Van, Sungho Jung, Minho Yeon and Giha Lee
Remote Sens. 2021, 13(20), 4033; https://doi.org/10.3390/rs13204033 - 09 Oct 2021
Cited by 23 | Viewed by 3326
Abstract
Precipitation is a crucial component of the water cycle and plays a key role in hydrological processes. Recently, satellite-based precipitation products (SPPs) have provided grid-based precipitation with spatiotemporal variability. However, SPPs contain considerable uncertainty in estimated precipitation, and the spatial resolution of these products is still relatively coarse. To overcome these limitations, this study aims to generate a new grid-based daily precipitation product by combining rainfall observation data with multiple SPPs for the period 2003–2017 across South Korea. A Random Forest (RF) machine-learning model was applied to produce the new merged precipitation product. In addition, several statistical linear merging methods were adopted for comparison with the results of the RF model. To investigate the efficiency of RF, rainfall data from 64 Automated Synoptic Observation System (ASOS) stations were collected to analyze the accuracy of the products through several continuous and categorical indicators. The merged precipitation product is generally more accurate than any single satellite rainfall product, and the RF model is more effective than the statistical merging methods. These results suggest that the RF model can be applied for merging multiple satellite precipitation products, especially in sparsely gauged regions. Full article
(This article belongs to the Special Issue Innovative Application of AI in Remote Sensing)
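The merging step described in the abstract can be sketched as follows: several SPP estimates plus station coordinates are stacked as predictors and a Random Forest regressor is fitted against gauge observations. The synthetic data, feature set, and hyperparameters here are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch: merge several satellite precipitation products (SPPs)
# by training a Random Forest on gauge observations. Data, features, and
# hyperparameters are assumptions for demonstration, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days, n_gauges = 100, 64           # e.g. 64 ASOS stations

# Synthetic "true" gauge rainfall and three noisy SPP estimates of it
gauge = rng.gamma(shape=2.0, scale=3.0, size=(n_days, n_gauges))
spps = [gauge + rng.normal(0, s, gauge.shape) for s in (1.0, 2.0, 3.0)]
lat = rng.uniform(33, 39, n_gauges)   # station coordinates as features
lon = rng.uniform(125, 130, n_gauges)

# One row per (day, gauge): [SPP1, SPP2, SPP3, lat, lon] -> gauge value
X = np.column_stack(
    [s.ravel() for s in spps]
    + [np.tile(lat, n_days), np.tile(lon, n_days)]
)
y = gauge.ravel()

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
merged = rf.predict(X)                # merged precipitation estimate
```

On this toy data the merged estimate tracks the gauges more closely than the noisiest single SPP; in the paper the comparison is of course made against held-out stations, not in-sample.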

32 pages, 3422 KiB  
Article
Hyperspectral Dimensionality Reduction Based on Inter-Band Redundancy Analysis and Greedy Spectral Selection
by Giorgio Morales, John W. Sheppard, Riley D. Logan and Joseph A. Shaw
Remote Sens. 2021, 13(18), 3649; https://doi.org/10.3390/rs13183649 - 13 Sep 2021
Cited by 12 | Viewed by 3990
Abstract
Hyperspectral imaging systems are becoming widely used due to their increasing accessibility and their ability to provide detailed spectral responses based on hundreds of spectral bands. However, the resulting hyperspectral images (HSIs) come at the cost of increased storage requirements, increased computational time to process, and highly redundant data. Thus, dimensionality reduction techniques are necessary to decrease the number of spectral bands while retaining the most useful information. Our contribution is two-fold: First, we propose a filter-based method called interband redundancy analysis (IBRA) based on a collinearity analysis between a band and its neighbors. This analysis helps to remove redundant bands and dramatically reduces the search space. Second, we apply a wrapper-based approach called greedy spectral selection (GSS) to the results of IBRA to select bands based on their information entropy values and train a compact convolutional neural network to evaluate the performance of the current selection. We also propose a feature extraction framework that consists of two main steps: first, it reduces the total number of bands using IBRA; then, it can use any feature extraction method to obtain the desired number of feature channels. We present classification results obtained from our methods and compare them to other dimensionality reduction methods on three hyperspectral image datasets. Additionally, we used the original hyperspectral data cube to simulate the process of using actual filters in a multispectral imager. Full article
(This article belongs to the Special Issue Innovative Application of AI in Remote Sensing)
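The two-stage selection idea can be sketched as below: a redundancy filter prunes near-collinear neighboring bands, then the survivors are ranked by information entropy. This is a simplified stand-in: the paper's IBRA uses a VIF-based collinearity measure (approximated here by Pearson correlation), and its GSS step retrains a compact CNN per candidate, which is omitted; the threshold and binning are assumptions.

```python
# Simplified sketch of redundancy-based band pruning followed by
# entropy ranking. Correlation stands in for the paper's VIF-based
# collinearity measure; thresholds and binning are assumptions.
import numpy as np

def ibra_filter(cube, corr_threshold=0.95):
    """Keep a band only if it is not near-collinear with any kept band.
    cube: (n_pixels, n_bands) array of spectra."""
    n_bands = cube.shape[1]
    corr = np.abs(np.corrcoef(cube, rowvar=False))
    kept = []
    for b in range(n_bands):
        if all(corr[b, k] < corr_threshold for k in kept):
            kept.append(b)
    return kept

def entropy_ranking(cube, bands, n_bins=32):
    """Rank candidate bands by the Shannon entropy of their histograms."""
    def entropy(x):
        counts, _ = np.histogram(x, bins=n_bins)
        p = counts[counts > 0] / counts.sum()
        return float(-(p * np.log2(p)).sum())
    return sorted(bands, key=lambda b: entropy(cube[:, b]), reverse=True)

# Toy cube: 500 pixels x 40 bands where groups of 5 neighbors are redundant
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 8))
cube = np.repeat(base, 5, axis=1) + 0.01 * rng.normal(size=(500, 40))

candidates = ibra_filter(cube)             # redundancy-pruned search space
selected = entropy_ranking(cube, candidates)[:5]
```

On the toy cube the filter collapses each group of five redundant bands to a single representative, shrinking the search space before the more expensive wrapper stage.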

18 pages, 7417 KiB  
Article
Deep Learning Network Intensification for Preventing Noisy-Labeled Samples for Remote Sensing Classification
by Chuang Lin, Shanxin Guo, Jinsong Chen, Luyi Sun, Xiaorou Zheng, Yan Yang and Yingfei Xiong
Remote Sens. 2021, 13(9), 1689; https://doi.org/10.3390/rs13091689 - 27 Apr 2021
Cited by 6 | Viewed by 1954
Abstract
Deep-learning network performance depends on the accuracy of the training samples. Training samples are commonly labeled by human visual inspection or inherited from historical land-cover or land-use maps, and they usually contain label noise, depending on subjective knowledge and the date of the historical map. Helping the network to distinguish noisy labels during the training process is a prerequisite for applying the model across time and locations. This study proposes an antinoise framework, the Weight Loss Network (WLN), to achieve this goal. The WLN contains three main parts: (1) the segmentation subnetwork, which can be replaced by any state-of-the-art segmentation network; (2) the attention subnetwork (λ); and (3) the class-balance coefficient (α). Four types of label noise (insufficient, redundant, missing, and incorrect labels) were simulated by dilation and erosion processing to test the network's antinoise ability. The segmentation task was to extract buildings from the Inria Aerial Image Labeling Dataset, which includes Austin, Chicago, Kitsap County, Western Tyrol, and Vienna. The network's performance was evaluated against the original U-Net model by adding noisy training samples with different noise rates and noise levels. The results show that the proposed antinoise framework (WLN) maintains high accuracy, while the accuracy of the U-Net model drops. Specifically, after adding 50% of dilated-label samples at noise level 3, the U-Net model's accuracy dropped by 12.7% for OA, 20.7% for the Mean Intersection over Union (MIOU), and 13.8% for Kappa scores. By contrast, the accuracy of the WLN dropped by 0.2% for OA, 0.3% for the MIOU, and 0.8% for Kappa scores. For eroded-label samples at the same level, the accuracy of the U-Net model dropped by 8.4% for OA, 24.2% for the MIOU, and 43.3% for Kappa scores, while the accuracy of the WLN dropped by 4.5% for OA, 4.7% for the MIOU, and 0.5% for Kappa scores. These results show that the antinoise framework proposed in this paper can help current segmentation models avoid the impact of noisy training labels and has the potential to be trained on larger remote sensing image sets regardless of internal label error. Full article
(This article belongs to the Special Issue Innovative Application of AI in Remote Sensing)
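The core idea of weighting the loss so that likely-mislabeled pixels count less can be sketched as a noise-aware weighted binary cross-entropy. The down-weighting rule and the class-balance value below are illustrative heuristics, not the paper's learned attention subnetwork λ or its α.

```python
# Sketch of a noise-aware weighted binary cross-entropy for segmentation.
# The hard down-weighting rule and alpha value are illustrative stand-ins
# for the paper's learned attention subnetwork and class-balance coefficient.
import numpy as np

def weighted_bce(pred, label, alpha=0.7, noise_margin=0.9):
    """pred, label: arrays in [0, 1] of the same shape (per-pixel).
    alpha balances the foreground/background classes; pixels where the
    network confidently contradicts the given label are down-weighted."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    # High disagreement = the network is confident the label is wrong
    disagreement = np.abs(pred - label)
    weight = np.where(disagreement > noise_margin, 0.1, 1.0)
    bce = -(alpha * label * np.log(pred)
            + (1 - alpha) * (1 - label) * np.log(1 - pred))
    return float(np.mean(weight * bce))

# A confidently contradicted (likely mislabeled) pixel contributes less
# loss than an ordinary hard pixel:
clean = weighted_bce(np.array([0.6]), np.array([1.0]))
noisy = weighted_bce(np.array([0.02]), np.array([1.0]))
```

The design choice this illustrates: without the weight term, the mislabeled pixel would dominate the loss and drag the network toward the wrong label.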

Review


30 pages, 3485 KiB  
Review
Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review
by Waytehad Rose Moskolaï, Wahabou Abdou, Albert Dipanda and Kolyang
Remote Sens. 2021, 13(23), 4822; https://doi.org/10.3390/rs13234822 - 27 Nov 2021
Cited by 25 | Viewed by 9389
Abstract
A satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to exploit not only spatial information but also the temporal dimension of the data, which serves multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series prediction. However, these methods have limitations in some situations, so deep learning (DL) techniques have been introduced to achieve better performance. Reviews of machine learning and DL methods for time series prediction have been conducted in previous studies. However, to the best of our knowledge, none of these surveys has addressed the specific case of works using DL techniques and satellite images as datasets for prediction. Therefore, this paper concentrates on DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories: recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptrons). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented, including weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and workable solutions related to the use of DL for SITS prediction are highlighted. Full article
(This article belongs to the Special Issue Innovative Application of AI in Remote Sensing)

Other


18 pages, 4597 KiB  
Technical Note
Fast and Accurate Terrain Image Classification for ASTER Remote Sensing by Data Stream Mining and Evolutionary-EAC Instance-Learning-Based Algorithm
by Shimin Hu, Simon Fong, Lili Yang, Shuang-Hua Yang, Nilanjan Dey, Richard C. Millham and Jinan Fiaidhi
Remote Sens. 2021, 13(6), 1123; https://doi.org/10.3390/rs13061123 - 16 Mar 2021
Cited by 4 | Viewed by 2145
Abstract
Remote sensing streams a continuous data feed from satellites to ground stations for analysis. The analytics often must run in real time, as in emergency control, surveillance of military operations, or other rapidly changing scenarios. Traditional data mining requires all the data to be available before a model is induced by supervised learning for automatic image recognition or classification. Any update to the data prompts the model to be rebuilt from all the previous and new data, so training time grows indefinitely, making this unsuitable for real-time remote sensing applications. As a contribution to solving this problem, a new data-stream-mining approach to remote sensing analytics is formulated and reported in this paper. Fresh data collected from afar are used to approximate an image recognition model without reloading the history, eliminating the latency of rebuilding the model again and again. In the past, data stream mining has had a drawback in approximating a classification model with a sufficiently high level of accuracy, due to the one-pass incremental learning mechanism inherent in the design of data stream mining algorithms. To solve this problem, a novel streamlined sensor data processing method is proposed, called the evolutionary expand-and-contract instance-based learning algorithm (EEAC-IBL). The multivariate data stream is first expanded into many subspaces; the subspaces corresponding to the characteristics of the features are then selected and condensed into a significant feature subset. The selection operates stochastically rather than deterministically, using evolutionary optimization to approximate the best subgroup. Model learning for image recognition is then done on the fly via data stream mining. This stochastic approximation method is fast and accurate, offering an alternative to traditional machine learning for image recognition in remote sensing. Our experimental results show computing advantages over other classical approaches, with a mean accuracy improvement of 16.62%. Full article
(This article belongs to the Special Issue Innovative Application of AI in Remote Sensing)
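The expand-and-contract idea can be sketched as a stochastic (evolutionary) search over feature subsets for an instance-based (nearest-neighbour) classifier. The leave-one-out 1-NN fitness function and the simple mutation scheme below are simplifying assumptions for illustration, not the EEAC-IBL algorithm itself.

```python
# Illustrative sketch of evolutionary feature-subset selection for an
# instance-based (1-NN) classifier. The fitness function and mutation
# scheme are simplified assumptions, not EEAC-IBL itself.
import numpy as np

rng = np.random.default_rng(2)

def nn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy using only the features in `mask`."""
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(y[d.argmin(axis=1)] == y))

def evolve_subset(X, y, generations=30, pop=12):
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5              # expand: random subspaces
    for _ in range(generations):
        scores = np.array([nn_accuracy(X, y, m) for m in masks])
        parents = masks[scores.argsort()[-pop // 2:]]  # contract: keep fittest
        children = parents.copy()
        children ^= rng.random(children.shape) < 0.1   # mutate by bit flips
        masks = np.vstack([parents, children])
    scores = np.array([nn_accuracy(X, y, m) for m in masks])
    return masks[scores.argmax()]

# Toy stream batch: 2 informative features out of 6
X = rng.normal(size=(80, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
best = evolve_subset(X, y)
```

Because the fitness is evaluated on small incoming batches rather than the full history, this kind of stochastic selection fits the one-pass, on-the-fly setting the abstract describes.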
