Special Issue "Towards Practical Application of Artificial Intelligence in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (20 October 2021).

Special Issue Editors

Dr. Ziheng Sun
Guest Editor
Center for Spatial Information Science and Systems, George Mason University, 4400 University Drive, MSN 6E1, Fairfax, VA 22030, USA
Interests: geospatial cyberinfrastructure; artificial intelligence; machine learning; remote sensing; image recognition; geoinformatics; agricultural drought; high-performance computing
Prof. Liping Di
Guest Editor
Center for Spatial Information Science and Systems, George Mason University, 4400 University Drive, MSN 6E1, Fairfax, VA 22030, USA
Interests: earth system science; geospatial information science; agro-geoinformatics; geospatial web service; spatial data infrastructure; geospatial data catalog; interoperability standard; agricultural drought monitoring and forecasting
Dr. Daniel Tong
Guest Editor
Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, 4400 University Dr, Fairfax, VA 22030, USA
Interests: satellite remote sensing; atmospheric chemistry and composition; climate analysis; emission assimilation; air quality forecasting/reanalysis; ozone; PM2.5; dust storms
Dr. Annie Burgess
Guest Editor
ESIP Lab
Interests: earth science; geoinformatics; data management

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is attracting growing attention in remote sensing. There are many success stories of applying neural networks to remote sensing datasets to solve Earth system science problems. However, using these models in real-world scenarios remains challenging and requires advanced computational resources and careful AI engineering. For instance, the depth and the huge number of parameters of deep neural network models greatly increase the complexity, resource consumption, and entry barrier of applying AI models to remote sensing datasets. The uncertainty of AI methods also undermines the trustworthiness of their results among stakeholders. This Special Issue will discuss the latest progress on the full-stack workflow of applying AI models to remotely sensed or field-observed geospatial datasets, including satellite imagery, aerial images, ground sensor networks, model simulations, reanalysis products, radar data, survey tables, etc. The aim is to bring together community experiences to refine our theory and technology for building, integrating, and utilizing AI models to practically address the remote sensing challenges raised in solving critical Earth system science problems.

To involve participants from across the entire Earth science community, the scope of this Special Issue “Towards Practical Application of AI in Remote Sensing” covers a variety of topics, including but not limited to:

  • AI application in land cover/land use classification;
  • AI application in remote sensing-based atmospheric/climate science;
  • AI application in remote sensing-based Earth science;
  • AI application in agricultural remote sensing;
  • AI application in remote sensing-based biology;
  • AI application in remote sensing-based polar science;
  • AI application in remote sensing-based hydrology and oceanography (e.g., waterbody, snow, and ice monitoring/prediction);
  • AI application in remote sensing-based natural disaster management;
  • AI application in remote sensing-related geospatial information science;
  • Advanced geospatial cyberinfrastructure for powering practical AI application.

We kindly invite community members to submit work that applies state-of-the-art AI techniques to solving critical science problems. All submitted manuscripts will go through a peer-review process. We eagerly look forward to receiving your submissions.

All the best,

Dr. Ziheng Sun
Prof. Liping Di
Dr. Daniel Tong
Dr. Annie Burgess
Guest Editors

Related References


  1. Sun, Ziheng, Liping Di, Daniel Tong, and Annie Bryant Burgess. "Advanced Geospatial Cyberinfrastructure for Deep Learning Posters." In AGU Fall Meeting 2019. AGU, 2019.
  2. Sun, Ziheng, and Daniel Tong. "Deep Learning for Improving Short-Term Atmospheric Modeling and Prediction Posters." In AGU Fall Meeting 2019. AGU, 2019.
  3. Sun, Ziheng, Liping Di, and Hui Fang. "Using Long Short-Term Memory Recurrent Neural Network in Land Cover Classification on Landsat and Cropland Data Layer Time Series." International Journal of Remote Sensing 40, no. 2 (2019): 593–614.

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Geospatial cyberinfrastructure 
  • Neural network 
  • Earth system science
  • Geoinformatics

Published Papers (9 papers)

Research


Article
Can Neural Networks Forecast Open Field Burning of Crop Residue in Regions with Anthropogenic Management and Control? A Case Study in Northeastern China
Remote Sens. 2021, 13(19), 3988; https://doi.org/10.3390/rs13193988 - 05 Oct 2021
Viewed by 311
Abstract
Open burning is often used to remove crop residue during the harvest season. Despite a series of regulations by the Chinese government, open burning of crop residue still frequently occurs in China, and monitoring and forecasting crop fires has become a topic of active research. In this paper, crop fires in Northeastern China were forecasted using an artificial neural network (ANN) based on moderate-resolution imaging spectroradiometer (MODIS) satellite fire data from 2013–2020. Both natural factors (meteorology, soil moisture content, harvest date) and anthropogenic factors were considered. The model’s forecasting accuracy under natural factors reached 77.01% during 2013–2017. When the influence of anthropogenic management and control policies was considered, such as the straw open-burning prohibition areas in Jilin Province, the accuracy of the forecast results for 2020 dropped to 60%. Although this is lower than the accuracy under natural factors alone, the relative error between the observed fire points and the back-propagation neural network (BPNN) forecasts was acceptable. In terms of influencing factors, air pressure, the change in soil moisture content over a 24 h period, and the daily soil moisture content were significantly correlated with open burning. The results of this study improve our ability to forecast agricultural fires and provide a scientific framework for regional prevention and control of crop residue burning.
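To make the BPNN idea concrete, here is a minimal sketch (not the authors' model) of a one-hidden-layer network trained by plain back-propagation on a binary fire/no-fire label. The three features loosely echo the factors the study found significant (air pressure, 24 h soil-moisture change, daily soil moisture), but all data, sizes, and hyperparameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in features per grid cell; label 1 = open burning observed.
n = 400
X = rng.normal(size=(n, 3))
y = (X @ np.array([1.5, -2.0, -1.0]) + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer trained by back-propagation (a BPNN).
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted fire probability
    g = (p - y)[:, None] / n              # dL/dlogit for cross-entropy
    gh = (g @ W2.T) * (1.0 - h ** 2)      # back-propagate through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

accuracy = float(((p > 0.5) == y).mean())
```

On this nearly separable synthetic data the training accuracy ends well above chance; a real setup would of course validate on held-out years, as the paper does with its 2020 forecasts.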

Article
Deriving Non-Cloud Contaminated Sentinel-2 Images with RGB and Near-Infrared Bands from Sentinel-1 Images Based on a Conditional Generative Adversarial Network
Remote Sens. 2021, 13(8), 1512; https://doi.org/10.3390/rs13081512 - 14 Apr 2021
Viewed by 578
Abstract
Sentinel-2 images have been widely used in studying land surface phenomena and processes, but they inevitably suffer from cloud contamination. To solve this critical optical data availability issue, it is ideal to fuse Sentinel-1 and Sentinel-2 images to create fused, cloud-free Sentinel-2-like images for facilitating land surface applications. In this paper, we propose a new data fusion model, the Multi-channel Conditional Generative Adversarial Network (MCcGAN), based on the conditional generative adversarial network, which is able to convert images from Domain A to Domain B. With the model, we were able to generate fused, cloud-free Sentinel-2-like images for a target date by using a pair of reference Sentinel-1/Sentinel-2 images and target-date Sentinel-1 images as inputs. To demonstrate the superiority of our method, we also compared it with other state-of-the-art methods using the same data. To make the evaluation more objective and reliable, we calculated the root-mean-square error (RMSE), R2, Kling–Gupta efficiency (KGE), structural similarity index (SSIM), spectral angle mapper (SAM), and peak signal-to-noise ratio (PSNR) of the simulated Sentinel-2 images generated by different methods. The results show that the simulated Sentinel-2 images generated by the MCcGAN have higher quality and accuracy than those produced by the previous methods.
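Several of the evaluation metrics listed in the abstract have compact standard definitions. The following numpy sketch shows RMSE, PSNR, and SAM under their usual definitions (R2, KGE, and SSIM are omitted for brevity); it is a generic illustration, not an implementation from the paper:

```python
import numpy as np

def rmse(a, b):
    # Root-mean-square error between two images.
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio in dB, relative to the data range.
    return 20.0 * np.log10(data_range / rmse(a, b))

def sam(a, b):
    # Spectral angle mapper: mean angle (radians) between the per-pixel
    # spectra of two (H, W, bands) images.
    dot = np.sum(a * b, axis=-1)
    denom = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    return float(np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0))))
```

For example, two constant 4-band images at 0.5 and 0.6 differ by RMSE 0.1 (hence PSNR 20 dB for unit range) while their per-pixel spectra are parallel, so SAM is zero.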

Article
Rotation Invariance Regularization for Remote Sensing Image Scene Classification with Convolutional Neural Networks
Remote Sens. 2021, 13(4), 569; https://doi.org/10.3390/rs13040569 - 05 Feb 2021
Cited by 4 | Viewed by 1116
Abstract
Deep convolutional neural networks (DCNNs) have shown significant improvements in remote sensing image scene classification thanks to their powerful feature representations. However, because of the high variance and limited volume of the available remote sensing datasets, DCNNs are prone to overfitting the data used for their training. To address this problem, this paper proposes a novel scene classification framework based on a deep Siamese convolutional network with rotation invariance regularization. Specifically, we design a data augmentation strategy for the Siamese model to learn a rotation-invariant DCNN, achieved by directly enforcing the predicted labels of training samples before and after rotation to be mapped close to each other. In addition to the cross-entropy cost function of traditional CNN models, we impose a rotation invariance regularization constraint on the objective function of our proposed model. The experimental results obtained using three publicly available scene classification datasets show that the proposed method generally improves classification performance by 2–3% and achieves satisfactory performance compared with some state-of-the-art methods.
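The shape of such an objective is easy to sketch: a classification loss plus a penalty that pulls together the features of an image and its rotated copy, computed with shared (Siamese) weights. The toy embedding, the 90° rotation, the squared-distance penalty, and the weight `lam` below are all illustrative assumptions standing in for the paper's DCNN and its regularizer:

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one sample.
    z = logits - logits.max()
    return -(z[label] - np.log(np.exp(z).sum()))

def embed(img, W):
    # Toy feature extractor standing in for one Siamese DCNN branch.
    return np.tanh(W @ img.ravel())

def rotation_invariance_loss(img, label, W, Wc, lam=0.1):
    # Classification loss on the original view plus a penalty pulling the
    # features of the image and its rotated copy together.
    f = embed(img, W)                      # branch 1
    f_rot = embed(np.rot90(img), W)        # branch 2, shared weights
    penalty = np.sum((f - f_rot) ** 2)     # rotation invariance regularizer
    return cross_entropy(Wc @ f, label) + lam * penalty

# Demo with random weights: a rotation-symmetric image incurs no
# regularization penalty; an asymmetric one does.
W = rng.normal(size=(5, 16))               # 4x4 image -> 5-D feature
Wc = rng.normal(size=(3, 5))               # 3 scene classes
loss_sym = rotation_invariance_loss(np.ones((4, 4)), 0, W, Wc, lam=10.0)
loss_asym = rotation_invariance_loss(np.arange(16.0).reshape(4, 4), 0, W, Wc, lam=10.0)
```

Minimizing the penalty term over rotated training pairs is what drives the learned representation toward rotation invariance.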

Article
An Adversarial Generative Network for Crop Classification from Remote Sensing Timeseries Images
Remote Sens. 2021, 13(1), 65; https://doi.org/10.3390/rs13010065 - 26 Dec 2020
Cited by 3 | Viewed by 1375
Abstract
Due to the increasing demand for monitoring of crop conditions and food production, identifying crops from remote sensing images is a challenging and meaningful task. State-of-the-art crop classification models are mostly built on supervised models such as support vector machines (SVM), convolutional neural networks (CNN), and long short-term memory networks (LSTM). Meanwhile, as an unsupervised generative model, the generative adversarial network (GAN) is rarely used for classification tasks in agricultural applications. In this work, we propose a new method that combines GAN, CNN, and LSTM models to classify corn and soybeans from remote sensing time-series images, in which the GAN’s discriminator is used as the final classifier. The method is feasible when the training samples are small, and it takes full advantage of the spectral, spatial, and phenological features of crops in satellite data. The classification experiments were conducted on corn, soybeans, and other crops. To verify the effectiveness of the proposed method, comparisons with SVM, SegNet, CNN, LSTM, and different combinations thereof were also conducted. The results show that our method achieved the best classification results, with a Kappa coefficient of 0.7933 and an overall accuracy of 0.86. Experiments in other study areas also demonstrate the extensibility of the proposed method.

Article
Estimating Crop LAI Using Spectral Feature Extraction and the Hybrid Inversion Method
Remote Sens. 2020, 12(21), 3534; https://doi.org/10.3390/rs12213534 - 28 Oct 2020
Cited by 4 | Viewed by 930
Abstract
The leaf area index (LAI) is an essential indicator used in crop growth monitoring. In this study, a hybrid inversion method, which combines a physical model with a statistical method, is proposed to estimate crop LAI. Simulated compact high-resolution imaging spectrometer (CHRIS) canopy spectral reflectance datasets were generated using the PROSAIL model (the coupling of the PROSPECT leaf optical properties model and the Scattering by Arbitrarily Inclined Leaves model) and the CHRIS band response function. Partial least squares (PLS) was then used to reduce the dimension of the simulated spectral data. Using the principal components (PCs) of PLS as model inputs, hybrid inversion models were built with various modeling algorithms, including the backpropagation artificial neural network (BP-ANN), least squares support vector regression (LS-SVR), and random forest regression (RFR). Finally, remote sensing mapping of the CHRIS data was performed with the hybrid model to test the inversion accuracy of the LAI estimates. The validation yielded an accuracy of R2 = 0.939 and normalized root-mean-square error (NRMSE) = 6.474% for the PLS_RFR model, indicating that crop LAI can be estimated accurately using spectral feature extraction and a hybrid inversion strategy. The results showed that the model based on principal components extracted by PLS had good estimation accuracy and noise immunity and was the preferred method for LAI estimation. Furthermore, comparative analysis of various datasets showed that prior knowledge could improve the precision of the retrieved LAI, and using this information to constrain parameters that make important contributions to the spectra (e.g., chlorophyll content or LAI) is the key to this improvement. In addition, among the PLS, BP-ANN, LS-SVR, and RFR methods, RFR was the optimal modeling algorithm in this paper, as indicated by its high R2 and low NRMSE across various datasets.
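The PLS-then-regressor pipeline can be sketched as follows. This uses NIPALS PLS1 for score extraction with a plain least-squares read-out standing in for the paper's RFR/BP-ANN/LS-SVR back ends, and the synthetic "spectra" and "LAI" values are purely illustrative, not PROSAIL output:

```python
import numpy as np

def pls_fit(X, y, k):
    # NIPALS PLS1: extract k score directions that maximally covary with y.
    Xc, yc = X - X.mean(0), y - y.mean()
    Ws, Ps, Ts = [], [], []
    for _ in range(k):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        p = Xc.T @ t / (t @ t)
        Ws.append(w); Ps.append(p); Ts.append(t)
        Xc = Xc - np.outer(t, p)              # deflate X
        yc = yc - t * (yc @ t) / (t @ t)      # deflate y
    W, P, T = np.array(Ws).T, np.array(Ps).T, np.array(Ts).T
    R = W @ np.linalg.inv(P.T @ W)            # maps centered X to scores
    return R, T

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))                # stand-in for simulated spectra
beta = np.zeros(30); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.01 * rng.normal(size=200)    # stand-in for PROSAIL LAI

R, T = pls_fit(X, y, k=3)                     # 3 PLS components as features
coef, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
pred = (X - X.mean(0)) @ R @ coef + y.mean()
fit_rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The design point is that the regressor sees only a few PLS scores instead of the full spectrum, which is what gives the hybrid approach its noise immunity.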

Article
Missing Pixel Reconstruction on Landsat 8 Analysis Ready Data Land Surface Temperature Image Patches Using Source-Augmented Partial Convolution
Remote Sens. 2020, 12(19), 3143; https://doi.org/10.3390/rs12193143 - 24 Sep 2020
Viewed by 2091
Abstract
Missing pixels are a common issue in satellite images. Taking Landsat 8 Analysis Ready Data (ARD) Land Surface Temperature (LST) images as an example, the Source-Augmented Partial Convolution v2 model (SAPC2) is developed to reconstruct missing pixels in a target LST image with the assistance of a collocated complete source image. SAPC2 uses the partial-convolution-enabled U-Net as its framework and accommodates the source into that framework by (1) performing shared partial convolution on both the source and the target in the encoders, and (2) merging the source and the target via the partial merge layer to create complete skip-connection images for the corresponding decoders. The optimized SAPC2 shows superior performance over four baseline models (SAPC1, SAPC2-OPC, SAPC2-SC, and STS-CNN) on nine validation metrics. For example, the masked MSE of SAPC2 is 7%, 20%, 44%, and 59% lower than that of the four baselines, respectively. On the six scrutinized cases, the repaired target images generated by SAPC2 have the fewest artifacts near the mask boundary and the best recovery of color scales and fine textures compared with the four baseline models.
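The core partial-convolution operation (mask-aware convolution with renormalization over valid pixels, following Liu et al.'s formulation) can be sketched in a few lines. This is a single-channel, unoptimized illustration of the mechanism, not the SAPC2 layer itself:

```python
import numpy as np

def partial_conv(image, mask, kernel):
    # Convolve only over valid (mask == 1) pixels and renormalize by the
    # fraction of valid pixels under the window; the updated mask marks
    # outputs that had at least one valid input.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    new_mask = np.zeros_like(out)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            win = image[i:i + kh, j:j + kw]
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                # Hole pixels contribute nothing; scale by total/valid count.
                out[i, j] = np.sum(kernel * win * m) * (kh * kw / valid)
                new_mask[i, j] = 1.0
    return out, new_mask
```

Because of the renormalization, a constant image with holes convolves back to the same constant wherever any valid pixel is visible, which is exactly why the operation avoids the dark halos a plain convolution would leave around masked regions.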

Article
LS-SSDD-v1.0: A Deep Learning Dataset Dedicated to Small Ship Detection from Large-Scale Sentinel-1 SAR Images
Remote Sens. 2020, 12(18), 2997; https://doi.org/10.3390/rs12182997 - 15 Sep 2020
Cited by 18 | Viewed by 2504
Abstract
Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community for its higher accuracy, faster speed, reduced human intervention, etc. However, there is still a lack of a reliable deep learning SAR ship detection dataset that can support the practical transfer of ship detection to large-scene spaceborne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) from Sentinel-1, for small ship detection under large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths are correctly labeled by SAR experts with support from the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without bells and whistles, which also makes it convenient to present detection results on the large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous and standardized research baselines. Last but not least, exploiting the abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. Experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to pursue SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
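The "directly cut into sub-images" step is plain tiling. A sketch follows; the tile size and scene dimensions are illustrative assumptions (for instance, a 24,000 x 16,000 scene at 800 x 800 tiles would yield 600 sub-images per scene, hence 9000 over 15 scenes), not a restatement of the dataset's actual geometry:

```python
import numpy as np

def tile_image(img, tile=800, stride=800):
    # Cut a large scene into non-overlapping sub-images; any ragged
    # right/bottom margin smaller than one tile is simply dropped here.
    tiles = []
    H, W = img.shape[:2]
    for top in range(0, H - tile + 1, stride):
        for left in range(0, W - tile + 1, stride):
            tiles.append(img[top:top + tile, left:left + tile])
    return tiles

# Scaled-down stand-in scene: 2400 x 1600 pixels -> 3 x 2 = 6 tiles.
scene = np.zeros((2400, 1600), dtype=np.uint8)
tiles = tile_image(scene)
```

Keeping the tiling deterministic and non-overlapping is what lets per-tile detections be stitched straight back onto the full scene for result presentation.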

Article
Automated Procurement of Training Data for Machine Learning Algorithm on Ship Detection Using AIS Information
Remote Sens. 2020, 12(9), 1443; https://doi.org/10.3390/rs12091443 - 02 May 2020
Cited by 9 | Viewed by 2306
Abstract
The development of convolutional neural networks (CNNs) optimized for object detection has led to significant advances in ship detection. Although training data critically affect the performance of a CNN-based model, previous studies focused mostly on enhancing the architecture of the model itself. This study developed a sophisticated, automatic methodology to generate verified and robust training data from synthetic aperture radar (SAR) images and automatic identification system (AIS) data. The extraction of training data began by interpolating the discretely received AIS positions to the exact position of each ship at the time of image acquisition. The interpolation was conducted by applying a Kalman filter, followed by compensating for the Doppler frequency shift. The bounding box for each ship was constructed tightly, considering the installation position of the AIS equipment and the exact size of the ship. From 18 Sentinel-1 SAR images, 7489 training samples were obtained through the completely automated procedure and compared with a separate set of training data from visual interpretation. The ship detection model trained on the automatic training data achieved an overall detection performance of 0.7713 on 3 Sentinel-1 SAR images, exceeding that of the manual training data while avoiding false detections on artificial harbor structures and azimuth-ambiguity ghost signals.
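The Kalman-filter interpolation of sparse AIS fixes to the image acquisition time can be sketched with a 1-D constant-velocity model. The noise levels `q` and `r` and the per-coordinate treatment are illustrative assumptions (a real tracker would run per coordinate or jointly in 2-D, and the paper additionally compensates the Doppler shift, which is omitted here):

```python
import numpy as np

def kalman_track(times, positions, q=1e-3, r=1e-2):
    # 1-D constant-velocity Kalman filter over sparse AIS position fixes.
    # State: [position, velocity]; q, r are process/measurement noise levels.
    x = np.array([positions[0], 0.0])
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])             # we observe position only
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = F @ x                           # predict to the fix time
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T)[0, 0] + r         # innovation variance
        K = (P @ H.T / S).ravel()           # Kalman gain, shape (2,)
        x = x + K * (positions[k] - x[0])   # update with the AIS fix
        P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, times[-1]

def position_at(x, t_last, t_image):
    # Propagate the last filtered state to the SAR acquisition time.
    return x[0] + x[1] * (t_image - t_last)
```

On a noise-free straight-line track the filtered velocity converges to the true value, so propagating the last state to the acquisition time recovers the ship's position there.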

Review


Review
SAR Ship Detection Dataset (SSDD): Official Release and Comprehensive Data Analysis
Remote Sens. 2021, 13(18), 3690; https://doi.org/10.3390/rs13183690 - 15 Sep 2021
Cited by 2 | Viewed by 820
Abstract
The SAR Ship Detection Dataset (SSDD) is the first open dataset that is widely used to research state-of-the-art ship detection from Synthetic Aperture Radar (SAR) imagery based on deep learning (DL). According to our investigation, up to 46.59% of the 161 public reports in total confidently select SSDD to study DL-based SAR ship detection. Undoubtedly, this reveals the popularity and great influence of SSDD in the SAR remote sensing community. Nevertheless, the coarse annotations and ambiguous usage standards of its initial version hinder both fair methodological comparison and effective academic exchange. Additionally, its single-function horizontal-vertical rectangle bounding box (BBox) labels can no longer satisfy the current research needs of the rotatable bounding box (RBox) task and the pixel-level polygon segmentation task. Therefore, to address these two dilemmas, in this review, advocated by the publisher of SSDD, we make an official release of SSDD based on its initial version. SSDD's official release covers three variants: (1) a bounding box SSDD (BBox-SSDD), (2) a rotatable bounding box SSDD (RBox-SSDD), and (3) a polygon segmentation SSDD (PSeg-SSDD). We relabel the ships in SSDD more carefully and finely, and then explicitly formulate some strict usage standards, e.g., (1) the training-test division, (2) the inshore-offshore protocol, (3) a reasonable definition of ship size, (4) the determination of densely distributed small ship samples, and (5) the determination of samples of ships densely berthed in parallel at ports. These standards are formulated objectively based on the usage differences among the existing 75 (161 × 46.59%) public reports, and they will be beneficial for fair method comparison and effective academic exchange in the future. Most notably, we conduct a comprehensive data analysis on BBox-SSDD, RBox-SSDD, and PSeg-SSDD. Our analysis results can provide valuable suggestions for future researchers seeking to design DL-based SAR ship detectors with higher accuracy and stronger robustness using SSDD.
