Special Issue "Towards Practical Application of Artificial Intelligence in Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: 31 May 2021.

Special Issue Editors

Dr. Ziheng Sun
Guest Editor
Center for Spatial Information Science and Systems, George Mason University, 4400 University Drive, MSN 6E1, Fairfax, VA 22030, USA
Interests: geospatial cyberinfrastructure; artificial intelligence; machine learning; remote sensing; image recognition; geoinformatics; agricultural drought; high-performance computing
Prof. Liping Di
Guest Editor
Center for Spatial Information Science and Systems, George Mason University, 4400 University Drive, MSN 6E1, Fairfax, VA 22030, USA
Interests: earth system science; geospatial information science; agro-geoinformatics; geospatial web service; spatial data infrastructure; geospatial data catalog; interoperability standard; agricultural drought monitoring and forecasting
Dr. Daniel Tong
Guest Editor
Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, 4400 University Dr, Fairfax, VA 22030, USA
Interests: satellite remote sensing; atmospheric chemistry and composition; climate analysis; emission assimilation; air quality forecasting/reanalysis; ozone; PM2.5; dust storms
Dr. Annie Burgess
Guest Editor
ESIP Lab
Interests: earth science; geoinformatics; data management

Special Issue Information

Dear Colleagues,

Artificial intelligence is attracting growing attention in remote sensing. There have been many success stories of applying neural networks to remote sensing datasets to solve Earth system science problems. However, using these models practically in real-world scenarios remains challenging and requires advanced computational resources and careful AI engineering. For instance, the deep stacks and huge numbers of parameters in deep neural network models greatly increase the complexity, resource consumption, and entry barrier of applying AI models to remote sensing datasets. The uncertainty of AI methods also affects the trustworthiness of the results among stakeholders. This Special Issue will discuss the latest progress on the full-stack workflow of applying AI models to remotely sensed or field-observed geospatial datasets, including satellite imagery, aerial images, ground sensor networks, model simulations, reanalyses, radar data, surveyed tables, etc. The aim is to bring together community experience to refine the theory and technology for building, integrating, and utilizing AI models to practically address the remote sensing challenges raised in solving critical Earth system science problems.

In order to involve more participants from the entire Earth science community, the scope of this Special Issue “Towards Practical Application of AI in Remote Sensing” will cover a variety of topics, including but not limited to:

  • AI application in land cover/land use classification;
  • AI application in remote sensing-based atmospheric/climate science;
  • AI application in remote sensing-based Earth science;
  • AI application in agricultural remote sensing;
  • AI application in remote sensing-based biology;
  • AI application in remote sensing-based polar science;
  • AI application in remote sensing-based hydrology and oceanography (e.g., waterbody, snow, and ice monitoring/prediction);
  • AI application in remote sensing-based natural disaster management;
  • AI application in remote sensing-related geospatial information science;
  • Advanced geospatial cyberinfrastructure for powering practical AI application.

We kindly invite community members to submit their work applying state-of-the-art AI techniques to critical science problems. All submitted manuscripts will go through a peer review process. We eagerly look forward to receiving your submissions.

All the best,

Dr. Ziheng Sun
Prof. Liping Di
Dr. Daniel Tong
Dr. Annie Burgess
Guest Editors

Related References


  1. Sun, Ziheng, Liping Di, Daniel Tong, and Annie Bryant Burgess. "Advanced Geospatial Cyberinfrastructure for Deep Learning Posters." In AGU Fall Meeting 2019. AGU, 2019.
  2. Sun, Ziheng, and Daniel Tong. "Deep Learning for Improving Short-Term Atmospheric Modeling and Prediction Posters." In AGU Fall Meeting 2019. AGU, 2019.
  3. Sun, Ziheng, Liping Di, and Hui Fang. "Using long short-term memory recurrent neural network in land cover classification on Landsat and Cropland data layer time series." International Journal of Remote Sensing 40, no. 2 (2019): 593–614.

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Geospatial cyberinfrastructure 
  • Neural network 
  • Earth system science
  • Geoinformatics

Published Papers (4 papers)


Research

Open Access Article
Estimating Crop LAI Using Spectral Feature Extraction and the Hybrid Inversion Method
Remote Sens. 2020, 12(21), 3534; https://doi.org/10.3390/rs12213534 - 28 Oct 2020
Abstract
The leaf area index (LAI) is an essential indicator used in crop growth monitoring. In this study, a hybrid inversion method, which combined a physical model with a statistical method, was proposed to estimate crop LAI. Simulated Compact High Resolution Imaging Spectrometer (CHRIS) canopy reflectance datasets were generated using the PROSAIL model (the coupling of the PROSPECT leaf optical properties model and the Scattering by Arbitrarily Inclined Leaves model) and the CHRIS band response function. Partial least squares (PLS) was then used to reduce the dimension of the simulated spectral data. Using the principal components (PCs) of PLS as the model inputs, hybrid inversion models were built with various modeling algorithms, including the backpropagation artificial neural network (BP-ANN), least squares support vector regression (LS-SVR), and random forest regression (RFR). Finally, remote sensing mapping of the CHRIS data was achieved with the hybrid model to test the inversion accuracy of the LAI estimates. The validation yielded an accuracy of R2 = 0.939 and a normalized root-mean-square error (NRMSE) of 6.474% for the PLS_RFR model, indicating that crop LAI can be estimated accurately using spectral feature extraction and a hybrid inversion strategy. The results showed that the model based on principal components extracted by PLS had good estimation accuracy and noise immunity and was the preferred method for LAI estimation. Furthermore, the comparative analysis of various datasets showed that prior knowledge can improve the precision of the retrieved LAI, and using this information to constrain parameters that make important contributions to the spectra (e.g., chlorophyll content or LAI) is the key to this improvement. In addition, among the PLS, BP-ANN, LS-SVR, and RFR methods, RFR was the optimal modeling algorithm in the paper, as indicated by its high R2 and low NRMSE across the various datasets.
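As a rough illustration of the hybrid inversion workflow summarized above, the sketch below pairs PLS feature extraction with random forest regression in scikit-learn. The arrays standing in for the PROSAIL-simulated reflectance and LAI labels, the band count, the number of PLS components, and the forest size are all illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: PLS feature extraction followed by random forest regression.
# Stand-in random arrays replace the PROSAIL-simulated CHRIS reflectance (X)
# and the corresponding LAI values (y); all sizes are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 62))          # (n_samples, n_bands), band count is illustrative
y = rng.uniform(0.0, 7.0, 1000)     # LAI labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 1. Reduce the spectral dimension with PLS and keep its latent components.
pls = PLSRegression(n_components=5)          # component count is an assumption
pls.fit(X_train, y_train)
pc_train = pls.transform(X_train)
pc_test = pls.transform(X_test)

# 2. Train a random forest regressor on the PLS components
#    (analogous to the PLS_RFR model described in the abstract).
rfr = RandomForestRegressor(n_estimators=500, random_state=0)
rfr.fit(pc_train, y_train)

y_pred = rfr.predict(pc_test)
rmse = mean_squared_error(y_test, y_pred) ** 0.5
nrmse = rmse / (y_test.max() - y_test.min()) * 100
print(f"R2 = {r2_score(y_test, y_pred):.3f}, NRMSE = {nrmse:.2f}%")
```

The other regressors compared in the abstract (BP-ANN, LS-SVR) could be slotted into step 2 in the same way.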

Open Access Article
Missing Pixel Reconstruction on Landsat 8 Analysis Ready Data Land Surface Temperature Image Patches Using Source-Augmented Partial Convolution
Remote Sens. 2020, 12(19), 3143; https://doi.org/10.3390/rs12193143 - 24 Sep 2020
Abstract
Missing pixels are a common issue in satellite images. Taking the Landsat 8 Analysis Ready Data (ARD) Land Surface Temperature (LST) image as an example, the Source-Augmented Partial Convolution v2 model (SAPC2) is developed to reconstruct missing pixels in a target LST image with the assistance of a collocated complete source image. SAPC2 uses the partial-convolution-enabled U-Net as its framework and accommodates the source in this framework by (1) performing shared partial convolution on both the source and the target in the encoders, and (2) merging the source and the target with the partial merge layer to create complete skip-connection images for the corresponding decoders. The optimized SAPC2 shows superior performance to four baseline models (i.e., SAPC1, SAPC2-OPC, SAPC2-SC, and STS-CNN) in terms of nine validation metrics. For example, the masked MSE of SAPC2 is 7%, 20%, 44%, and 59% lower than that of the four baseline models, respectively. In the six scrutinized cases, the repaired target images generated by SAPC2 have the fewest artifacts near the mask boundary and the best recovery of color scales and fine textures compared with the four baseline models.
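For readers unfamiliar with the building block named above, the sketch below shows a single mask-renormalized partial convolution layer in PyTorch. It is not the SAPC2 implementation, only the generic partial convolution idea the model builds on; layer sizes, tensor shapes, and the missing-pixel fraction are illustrative assumptions.

```python
# Minimal single-layer partial convolution sketch (PyTorch).
# Not the SAPC2 implementation; it only illustrates the mask-renormalized
# convolution that the model builds on.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel used to count valid pixels under each window.
        self.register_buffer("weight_mask",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = kernel_size * kernel_size
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x: (N, C, H, W) image with missing pixels zeroed out
        # mask: (N, 1, H, W) with 1 = valid pixel, 0 = missing pixel
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.weight_mask,
                                   stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Renormalize by the fraction of valid pixels; zero where none are valid.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale + bias
        out = out * (valid_count > 0).float()
        # Updated mask: a location is valid if any pixel in its window was valid.
        new_mask = (valid_count > 0).float()
        return out, new_mask

# Example: one 1-band 64x64 patch with ~30% of its pixels missing (illustrative).
x = torch.randn(1, 1, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.3).float()
layer = PartialConv2d(1, 16)
y, new_mask = layer(x * mask, mask)
```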

Open Access Article
LS-SSDD-v1.0: A Deep Learning Dataset Dedicated to Small Ship Detection from Large-Scale Sentinel-1 SAR Images
Remote Sens. 2020, 12(18), 2997; https://doi.org/10.3390/rs12182997 - 15 Sep 2020
Abstract
Ship detection in synthetic aperture radar (SAR) images is becoming a research hotspot. In recent years, with the rise of artificial intelligence, deep learning has come to dominate the SAR ship detection community for its higher accuracy, faster speed, lower human intervention, etc. However, there is still a lack of a reliable deep learning SAR ship detection dataset that can meet the practical migration application of ship detection in large-scene space-borne SAR images. To solve this problem, this paper releases a Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) built from Sentinel-1 imagery for small ship detection under large-scale backgrounds. LS-SSDD-v1.0 contains 15 large-scale SAR images whose ground truths are labeled by SAR experts with the support of the Automatic Identification System (AIS) and Google Earth. To facilitate network training, the large-scale images are directly cut into 9000 sub-images without bells and whistles, which also provides convenience for presenting subsequent detection results on large-scale SAR images. Notably, LS-SSDD-v1.0 has five advantages: (1) large-scale backgrounds, (2) small ship detection, (3) abundant pure backgrounds, (4) a fully automatic detection flow, and (5) numerous and standardized research baselines. Last but not least, building on the advantage of abundant pure backgrounds, we also propose a Pure Background Hybrid Training mechanism (PBHT-mechanism) to suppress false alarms from land in large-scale SAR images. The experimental results of an ablation study verify the effectiveness of the PBHT-mechanism. LS-SSDD-v1.0 can inspire related scholars to conduct extensive research into SAR ship detection methods with engineering application value, which is conducive to the progress of SAR intelligent interpretation technology.
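The "directly cut into sub-images" step described above amounts to a non-overlapping sliding-window crop. Below is a minimal sketch of that step; the 800-pixel tile size, the zero-padding of scene edges, and the toy scene dimensions are assumptions for illustration, not necessarily the dataset's exact settings.

```python
# Minimal sketch of cutting a large SAR scene into fixed-size sub-images.
# Tile size and edge-padding policy are placeholders, not LS-SSDD-v1.0 settings.
import numpy as np

def tile_image(image: np.ndarray, tile: int = 800):
    """Split a 2-D array into non-overlapping tile x tile patches,
    zero-padding the right/bottom edges so every patch is full size."""
    h, w = image.shape
    pad_h = (-h) % tile
    pad_w = (-w) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="constant")
    patches = []
    for r in range(0, padded.shape[0], tile):
        for c in range(0, padded.shape[1], tile):
            patches.append(((r, c), padded[r:r + tile, c:c + tile]))
    return patches  # list of ((row, col) offset, patch) pairs

# Example with a random array standing in for a Sentinel-1 scene.
scene = np.random.rand(2400, 1600).astype(np.float32)
patches = tile_image(scene, tile=800)
print(len(patches))  # 3 x 2 = 6 sub-images for this toy scene
```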

Open Access Article
Automated Procurement of Training Data for Machine Learning Algorithm on Ship Detection Using AIS Information
Remote Sens. 2020, 12(9), 1443; https://doi.org/10.3390/rs12091443 - 02 May 2020
Cited by 2
Abstract
The development of convolutional neural networks (CNNs) optimized for object detection has led to significant advances in ship detection. Although training data critically affect the performance of a CNN-based model, previous studies have focused mostly on enhancing the architecture of the model itself. This study developed a sophisticated, automatic methodology to generate verified and robust training data by combining synthetic aperture radar (SAR) images and automatic identification system (AIS) data. The extraction of training data began with interpolating the discretely received AIS positions to the exact position of each ship at the time of image acquisition. The interpolation was conducted by applying a Kalman filter, followed by compensation for the Doppler frequency shift. The bounding box for each ship was constructed tightly, considering the installation position of the AIS equipment and the exact size of the ship. Using a completely automated procedure, 7489 training samples were obtained from 18 Sentinel-1 SAR images and compared with a separate set of training data derived from visual interpretation. The ship detection model trained with the automatic training data achieved an overall detection performance of 0.7713 on 3 Sentinel-1 SAR images, exceeding that of the manually derived training data, while avoiding false detections from artificial harbor structures and azimuth ambiguity ghost signals.
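The key step described above, propagating sparse AIS reports to the SAR acquisition time, can be sketched with a simple constant-velocity Kalman filter, as below. This is not the paper's implementation: it works in a local metric coordinate frame, uses illustrative noise settings, and omits the Doppler frequency shift compensation discussed in the abstract.

```python
# Minimal constant-velocity Kalman filter that propagates sparse AIS fixes
# (in a local metric x/y frame) to the SAR acquisition time. Noise settings
# are illustrative; the paper's Doppler-shift compensation is omitted.
import numpy as np

def kalman_track(times, positions, t_query, meas_std=20.0, accel_std=0.05):
    """times: (n,) seconds; positions: (n, 2) metres; returns (x, y) at t_query."""
    x = np.array([positions[0, 0], positions[0, 1], 0.0, 0.0])  # [x, y, vx, vy]
    P = np.eye(4) * 1e4                    # large initial uncertainty
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
    R = np.eye(2) * meas_std**2            # measurement noise
    t_prev = times[0]

    def predict(x, P, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt             # constant-velocity transition
        Q = np.eye(4) * (accel_std * dt)**2  # simplified process noise
        return F @ x, F @ P @ F.T + Q

    for t, z in zip(times[1:], positions[1:]):
        x, P = predict(x, P, t - t_prev)
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        t_prev = t

    # Propagate the last filtered state to the image acquisition time.
    x, _ = predict(x, P, t_query - t_prev)
    return x[0], x[1]

# Toy example: three AIS fixes, queried 30 s after the last report.
t = np.array([0.0, 60.0, 120.0])
pos = np.array([[0.0, 0.0], [300.0, 50.0], [600.0, 100.0]])
print(kalman_track(t, pos, t_query=150.0))
```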
