

Special Issue "Artificial Intelligence-Based Learning Approaches for Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 July 2022) | Viewed by 9833

Special Issue Editor

Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, Incheon 22012, Korea
Interests: image processing, particularly image compression, motion estimation, demosaicking, and image enhancement; computational intelligence; fuzzy and rough set theories

Special Issue Information

Dear Colleagues,

Remote sensing is a tool for comprehending the Earth and supporting human–Earth interactions. Recent advances in remote sensing have enabled high-resolution monitoring of the Earth on a global scale, providing a massive amount of Earth observation data. These data must be processed with new levels of accuracy, complexity, security, and reliability. Therefore, applicable and consistent research on artificial intelligence-based learning methods and their application to image processing is needed for remote sensing. These methods can be general or specific tools of artificial intelligence, including regression models, neural networks, decision trees, information retrieval, reinforcement learning, graphical models, and decision processes. We trust that artificial intelligence, deep learning, and data science methods will provide promising tools for overcoming many challenging issues in remote sensing in terms of accuracy and reliability. This Special Issue is the second edition of “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”. It aims to report the latest advances and trends in advanced artificial intelligence and data science techniques for remote sensing data processing. Papers of both a theoretical and an applied nature, as well as contributions presenting new advanced artificial intelligence and data science techniques for the remote sensing research community, are welcome.

Dr. Gwanggil Jeon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI (architectures, models, learning, etc.) and data science approach for remote sensing
  • Explainable and interpretable machine learning
  • HPC-based and distributed machine learning for large-scale image analysis
  • Reinforcement learning for remote sensing
  • Information retrieval for remote sensing
  • Big data analytics for beyond 5G
  • Edge/fog computing for remote sensing
  • IoT data analytics in remote sensing
  • Data-driven applications in remote sensing

Published Papers (10 papers)


Research

Article
RBFA-Net: A Rotated Balanced Feature-Aligned Network for Rotated SAR Ship Detection and Classification
Remote Sens. 2022, 14(14), 3345; https://doi.org/10.3390/rs14143345 - 11 Jul 2022
Viewed by 394
Abstract
Ship detection with rotated bounding boxes in synthetic aperture radar (SAR) images is currently an active research topic. However, several obstacles remain, such as multi-scale ships, misalignment between rotated anchors and features, and the opposite requirements for spatial sensitivity of regression tasks and classification tasks. To solve these problems, we propose a rotated balanced feature-aligned network (RBFA-Net) in which three targeted networks are designed: a balanced attention feature pyramid network (BAFPN), an anchor-guided feature alignment network (AFAN), and a rotational detection network (RDN). BAFPN is an improved FPN with an attention module for fusing and enhancing multi-level features, by which we decrease the negative impact of multi-scale ship feature differences. In AFAN, we adopt an alignment convolution layer to adaptively align the convolution features according to rotated anchor boxes, solving the misalignment problem. In RDN, we propose a task decoupling module (TDM) to adjust the feature maps separately for the regression and classification tasks, resolving the conflict between them. In addition, we adopt a balanced L1 loss to balance the classification loss and regression loss. On a SAR rotated-ship-detection dataset, we conduct extensive ablation experiments and compare RBFA-Net with eight other state-of-the-art rotated detection networks. The results show that RBFA-Net improves mean average precision by 7.19% over the second-best network.
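The balanced L1 loss the abstract adopts is the one introduced in Libra R-CNN: near zero it behaves like a rebalanced smooth L1, and beyond |x| = 1 it grows linearly with slope gamma. A minimal sketch of that published formula (the default alpha and gamma values are assumptions, not taken from this paper):

```python
import math

def balanced_l1(x, alpha=0.5, gamma=1.5):
    """Balanced L1 loss (as defined in Libra R-CNN); alpha rebalances small
    errors, gamma sets the linear slope for large errors."""
    b = math.exp(gamma / alpha) - 1          # chosen so the gradient is continuous at |x| = 1
    ax = abs(x)
    if ax < 1.0:
        return alpha / b * (b * ax + 1) * math.log(b * ax + 1) - alpha * ax
    # constant C keeps the loss value continuous at |x| = 1
    C = alpha / b * (b + 1) * math.log(b + 1) - alpha - gamma
    return gamma * ax + C
```

Because the regression branch now contributes a bounded gradient for inliers, the classification loss no longer dominates training, which is the balancing effect the abstract refers to.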
(This article belongs to the Special Issue Artificial Intelligence-Based Learning Approaches for Remote Sensing)

Article
A Classifying-Inversion Method of Offshore Atmospheric Duct Parameters Using AIS Data Based on Artificial Intelligence
Remote Sens. 2022, 14(13), 3197; https://doi.org/10.3390/rs14133197 - 03 Jul 2022
Viewed by 401
Abstract
Atmospheric duct parameter inversion is an important aspect of evaluating the performance of microwave-band radar and communication systems. The Automatic Identification System (AIS) is one of the signal sources used for atmospheric duct parameter inversion. Determining the type of atmospheric duct before inversion plays an important role in the inversion results, but current inversion methods ignore this point. We outline a classifying-inversion method for atmospheric duct parameters that uses AIS signals combined with artificial intelligence. The method consists of an atmospheric duct classification model and a parameter inversion model: the classification model judges the type of atmospheric duct, and the inversion model inverts the duct parameters according to that type. Our findings demonstrate that the accuracy of the deep neural network (DNN)-based classification model exceeds 97%, and the parameter inversion model achieves better inversion accuracy than the traditional method, illustrating the effectiveness and accuracy of this novel method.
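The two-stage structure described above (classify the duct type, then dispatch to a type-specific inversion model) can be sketched generically. The `classifier` and `inversion_models` names here are hypothetical stand-ins for the paper's DNN classifier and per-type inversion networks:

```python
def classify_then_invert(features, classifier, inversion_models):
    """Two-stage inversion: first predict the duct type from the AIS-derived
    features, then run only the inversion model trained for that type."""
    duct_type = classifier(features)
    return duct_type, inversion_models[duct_type](features)

# Toy stand-ins to illustrate the dispatch (not the paper's actual models):
toy_classifier = lambda f: "surface" if f[0] > 0 else "elevated"
toy_models = {
    "surface": lambda f: f[0] * 2.0,   # pretend parameter estimate
    "elevated": lambda f: -f[0],
}
```

The benefit of the split is that each inversion model only ever sees samples of one duct type, so it does not have to learn the type boundary implicitly.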

Article
Using Open Vector-Based Spatial Data to Create Semantic Datasets for Building Segmentation for Raster Data
Remote Sens. 2022, 14(12), 2745; https://doi.org/10.3390/rs14122745 - 07 Jun 2022
Cited by 1 | Viewed by 975
Abstract
With increasing access to open spatial data, it is possible to improve the quality of analyses carried out in the preliminary stages of the investment process. The extraction of buildings from raster data is an important process, especially for urban, planning, and environmental studies. After processing, it makes it possible to represent the buildings registered in a given image, e.g., in a vector format, and with an up-to-date image to obtain current information on the location of buildings in a defined area. At the same time, in recent years there has been huge progress in the use of machine learning algorithms for object identification. In particular, semantic segmentation algorithms based on deep convolutional neural networks, which extract features from an image by means of masking, have proven themselves here. The main problem with applying semantic segmentation is the limited availability of masks, i.e., labelled data for training the network. Creating datasets by manually labelling data is a tedious, time-consuming, and capital-intensive process, and any errors may be reflected in later analysis results. This paper therefore aims to show how to automate the labelling of cadastral data from open spatial databases using convolutional neural networks, and to identify and extract buildings from high-resolution orthophotomaps based on these data. The research shows that automatic feature extraction using semantic segmentation on the basis of data from open spatial databases is possible and can provide adequate quality of results.
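The core of automating mask creation is rasterizing vector building footprints into binary label images. A minimal, dependency-free sketch using the classic ray-casting point-in-polygon test (real pipelines would use a GIS library; this only illustrates the idea):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon (vertex list)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the scanline
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def rasterize_mask(polygons, width, height):
    """Burn vector footprints into a binary mask; each pixel is labelled 1
    if its centre (col + 0.5, row + 0.5) falls inside any polygon."""
    return [[1 if any(point_in_polygon(c + 0.5, r + 0.5, p) for p in polygons) else 0
             for c in range(width)]
            for r in range(height)]
```

Sampling at pixel centres avoids ambiguity on polygon boundaries; production rasterizers additionally handle partial-coverage and all-touched policies.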

Article
A Sparse-Model-Driven Network for Efficient and High-Accuracy InSAR Phase Filtering
Remote Sens. 2022, 14(11), 2614; https://doi.org/10.3390/rs14112614 - 30 May 2022
Viewed by 393
Abstract
Phase filtering is a vital step for interferometric synthetic aperture radar (InSAR) terrain elevation measurements. Existing phase filtering methods can be divided into two categories: traditional model-based and deep learning (DL)-based. Previous studies have shown that DL-based methods are frequently superior to traditional ones. However, most existing DL-based methods are purely data-driven and neglect the filtering model, so they often need a large-scale, complex architecture to fit huge training sets. This makes it challenging to improve the accuracy of interferometric phase filtering without sacrificing speed. We therefore propose a sparse-model-driven network (SMD-Net) for efficient and high-accuracy InSAR phase filtering, obtained by unrolling the sparse regularization (SR) algorithm that solves the filtering model into a network. Unlike existing DL-based filtering methods, SMD-Net models the physical process of filtering and contains fewer layers and parameters, so it can maintain filtering accuracy without sacrificing speed. In addition, unlike the traditional SR algorithm, which sets the sparse transform by hand, a convolutional neural network (CNN) module adaptively learns this transform, significantly improving filtering performance. Extensive experiments on simulated and measured data demonstrate that the proposed method outperforms several advanced InSAR phase filtering methods in both accuracy and speed. To verify its performance with small training sets, the training samples were reduced to 10%; the proposed method remained comparable on the simulated data and superior on the real data relative to another DL-based method, demonstrating that it is not constrained by the need for a huge number of training samples.
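Unrolling a sparse regularization solver means turning each solver iteration into one network layer. The classical building block is the ISTA iteration with a soft-thresholding proximal step. A toy sketch with an identity forward operator and a fixed (not learned) transform; in SMD-Net the sparsifying transform is a learned CNN module, which this sketch does not reproduce:

```python
def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista_denoise(signal, lam=0.5, step=1.0, iters=20):
    """Unrolled ISTA for the toy problem min_x 0.5*||x - y||^2 + lam*||x||_1.
    Each loop iteration corresponds to one layer of a model-driven network:
    a gradient step on the data term followed by the sparsifying prox."""
    x = [0.0] * len(signal)
    for _ in range(iters):
        x = [soft_threshold(xi - step * (xi - yi), step * lam)
             for xi, yi in zip(x, signal)]
    return x
```

Because the layer structure mirrors the solver, the network needs far fewer parameters than a free-form CNN, which is the efficiency argument the abstract makes.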

Article
Analysing Process and Probability of Built-Up Expansion Using Machine Learning and Fuzzy Logic in English Bazar, West Bengal
Remote Sens. 2022, 14(10), 2349; https://doi.org/10.3390/rs14102349 - 12 May 2022
Cited by 1 | Viewed by 563
Abstract
This study investigates the process and probability of built-up expansion in the English Bazar Block of West Bengal, India, using multitemporal Landsat satellite images and an integrated machine learning and fuzzy logic model. The land use and land cover (LULC) classification was prepared using a support vector machine (SVM) classifier for 2001, 2011, and 2021. A landscape fragmentation technique, using the landscape fragmentation tool (an extension for ArcGIS software) and a frequency approach, was used to model the process of built-up expansion. To create the built-up expansion probability model, dominance, diversity, and connectivity indices of the built-up areas were created for each year and then integrated with fuzzy logic. The results show that during 2001–2021 the built-up areas increased by 21.67%, while vegetation and water bodies decreased by 9.28% and 4.63%, respectively. The accuracy of the LULC maps for 2001, 2011, and 2021 was 90.05%, 93.67%, and 96.24%, respectively. According to the built-up expansion model, 9.62% of the new built-up areas were created in recent decades, and the probability model predicts that 21.46% of the region will be converted into built-up areas. This study will assist decision-makers in proposing management strategies for systematic urban growth that does not damage the environment.
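Integrating several membership layers (here dominance, diversity, and connectivity) with fuzzy logic is commonly done with the fuzzy gamma overlay, which blends the fuzzy algebraic sum and product. A sketch of that standard GIS operator; the specific membership values and the gamma value the paper used are not reproduced here:

```python
def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma overlay: (fuzzy sum)^gamma * (fuzzy product)^(1-gamma).
    gamma=1 reduces to the increasive fuzzy algebraic sum, gamma=0 to the
    decreasive fuzzy algebraic product."""
    prod = 1.0   # fuzzy algebraic product of the memberships
    comp = 1.0   # product of complements, for the algebraic sum
    for m in memberships:
        prod *= m
        comp *= (1.0 - m)
    fuzzy_sum = 1.0 - comp
    return (fuzzy_sum ** gamma) * (prod ** (1.0 - gamma))
```

Applied per pixel to the three index layers, this yields a continuous 0–1 probability surface of the kind the abstract describes.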

Article
GCBANet: A Global Context Boundary-Aware Network for SAR Ship Instance Segmentation
Remote Sens. 2022, 14(9), 2165; https://doi.org/10.3390/rs14092165 - 30 Apr 2022
Cited by 2 | Viewed by 1057
Abstract
Synthetic aperture radar (SAR) is an advanced microwave sensor widely used in ocean surveillance, and its operation is not affected by light or weather. SAR ship instance segmentation provides not only the box-level ship location but also the pixel-level ship contour, which plays an important role in ocean surveillance. However, most existing methods have limited box-positioning ability, which hinders further accuracy improvement of instance segmentation. To solve this problem, we propose a global context boundary-aware network (GCBANet) for better SAR ship instance segmentation. Specifically, we propose two novel blocks: a global context information modeling block (GCIM-Block), which captures spatial global long-range dependencies of a ship's contextual surroundings, enabling larger receptive fields, and a boundary-aware box prediction block (BABP-Block), which estimates ship boundaries, achieving better cross-scale box prediction. Ablation studies confirm each block's effectiveness. On the two public datasets SSDD and HRSID, GCBANet outperforms nine competitive models: on SSDD it achieves 2.8% higher box average precision (AP) and 3.5% higher mask AP than the previous best model, and on HRSID the margins are 2.7% and 1.9%, respectively.
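The box AP figures quoted above are built on intersection-over-union matching between predicted and ground-truth boxes. For reference, the standard IoU computation for axis-aligned boxes (a generic definition, not code from the paper):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (e.g., 0.5), and AP averages precision over recall levels under that matching rule.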

Article
SDTGAN: Generation Adversarial Network for Spectral Domain Translation of Remote Sensing Images of the Earth Background Based on Shared Latent Domain
Remote Sens. 2022, 14(6), 1359; https://doi.org/10.3390/rs14061359 - 11 Mar 2022
Viewed by 677
Abstract
The synthesis of spectral remote sensing images of the Earth's background is affected by factors such as the atmosphere, illumination, and terrain, which makes it difficult to simulate random disturbances and real textures. Based on the shared-latent-domain hypothesis and generative adversarial networks, this paper proposes the SDTGAN method, which mines the correlation between spectral bands and directly generates target spectral remote sensing images of the Earth's background from source spectral images. The introduction of a shared latent domain allows multiple spectral domains to connect to each other without building a one-to-one model for each pair. Meanwhile, additional feature maps are introduced to fill in missing spectral information and improve geographic accuracy. Supervised training with a paired dataset, a cycle consistency loss, and a perceptual loss guarantees the uniqueness of the output. Experiments on Fengyun satellite observation data show that the proposed SDTGAN performs better than the baseline models in remote sensing image spectrum translation.
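The cycle consistency loss mentioned above penalizes the round-trip error after translating to the target domain and back. A minimal sketch of that generic idea; `forward` and `backward` are hypothetical stand-ins for the two domain-translation generators, not SDTGAN's actual networks:

```python
def l1_loss(a, b):
    """Mean absolute error between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, forward, backward):
    """Translate x to the other spectral domain and back, then penalise
    the round-trip reconstruction error."""
    return l1_loss(x, backward(forward(x)))
```

When the two generators are exact inverses the loss is zero; during training the term pushes them toward mutually consistent mappings, which is what constrains the translation to be unique.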

Article
Lite-YOLOv5: A Lightweight Deep Learning Detector for On-Board Ship Detection in Large-Scene Sentinel-1 SAR Images
Remote Sens. 2022, 14(4), 1018; https://doi.org/10.3390/rs14041018 - 20 Feb 2022
Cited by 9 | Viewed by 2057
Abstract
Synthetic aperture radar (SAR) satellites can provide microwave remote sensing images without weather and light constraints, so they are widely applied in maritime monitoring. Current SAR ship detection methods based on deep learning (DL) are difficult to deploy on satellites, because they usually have complex models and huge computational costs. To solve this problem, based on the You Only Look Once version 5 (YOLOv5) algorithm, we propose a lightweight on-board SAR ship detector called Lite-YOLOv5, which (1) reduces the model volume, (2) decreases the floating-point operations (FLOPs), and (3) realizes on-board ship detection without sacrificing accuracy. First, to obtain a lightweight network, we design a lightweight cross stage partial (L-CSP) module to reduce the amount of computation, and we apply network pruning for a more compact detector. Then, to ensure excellent detection performance, we integrate a histogram-based pure backgrounds classification (HPBC) module, a shape distance clustering (SDC) module, a channel and spatial attention (CSA) module, and a hybrid spatial pyramid pooling (H-SPP) module. To evaluate the on-board detection ability of Lite-YOLOv5, we also transplant it to the embedded platform NVIDIA Jetson TX2. Experimental results on the Large-Scale SAR Ship Detection Dataset-v1.0 (LS-SSDD-v1.0) show that Lite-YOLOv5 achieves a lightweight architecture with a 2.38 M model volume (14.18% of the model size of YOLOv5), on-board ship detection with a low computation cost (26.59% of the FLOPs of YOLOv5), and superior detection accuracy (a 1.51% F1 improvement compared with YOLOv5).
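The network pruning step mentioned above, in its simplest form, removes the lowest-magnitude weights to shrink the model. A generic magnitude-pruning sketch (the paper's actual pruning criterion and granularity may differ):

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights.
    Ties at the threshold are all pruned, so slightly more than the target
    fraction can be removed."""
    k = int(len(weights) * sparsity)          # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

After pruning, the surviving weights are typically fine-tuned for a few epochs to recover accuracy; structured variants prune whole channels so the saved FLOPs are realized on embedded hardware like the Jetson TX2.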

Article
ShadowDeNet: A Moving Target Shadow Detection Network for Video SAR
Remote Sens. 2022, 14(2), 320; https://doi.org/10.3390/rs14020320 - 11 Jan 2022
Cited by 1 | Viewed by 530
Abstract
Most existing SAR moving-target shadow detectors tend to generate missed detections because of their limited feature extraction capacity in complex scenes, and also tend to produce numerous false alarms due to their poor foreground–background discrimination. To solve these problems, this paper proposes a novel deep learning network called ShadowDeNet for better shadow detection of moving ground targets in video synthetic aperture radar (SAR) images. It relies on five major tools to guarantee its superior detection performance: (1) histogram equalization shadow enhancement (HESE) for enhancing shadow saliency to facilitate feature extraction, (2) a transformer self-attention mechanism (TSAM) for focusing on regions of interest to suppress clutter interference, (3) shape deformation adaptive learning (SDAL) for learning the deformed shadows of moving targets to handle motion speed variations, (4) semantic-guided anchor-adaptive learning (SGAAL) for generating optimized anchors that match shadow location and shape, and (5) online hard-example mining (OHEM) for selecting typical difficult negative samples to improve background discrimination. Extensive ablation studies confirm the effectiveness of each of these contributions. Experiments on the public Sandia National Laboratories (SNL) video SAR data reveal the state-of-the-art performance of ShadowDeNet, with a best F1 accuracy of 66.01%, in contrast to five competitive methods. Specifically, ShadowDeNet is superior to the experimental baseline Faster R-CNN by 9.00% F1 and to the previous best model by 4.96% F1, while sacrificing only a slight, acceptable amount of detection speed.
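The HESE step builds on classical histogram equalization: grey levels are remapped through the image's cumulative distribution so that dark shadow regions gain contrast. A textbook sketch of that base operation (ShadowDeNet's exact enhancement variant is not reproduced here):

```python
def equalize_histogram(pixels, levels=256):
    """Classical histogram equalization: remap grey levels through the
    cumulative distribution so the output histogram is roughly uniform."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied grey level
    n = len(pixels)
    if n == cdf_min:                          # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min)) for p in pixels]
```

Stretching the low end of the histogram is what makes faint target shadows more salient before they reach the feature extractor.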

Article
A Deep Learning-Based Generalized System for Detecting Pine Wilt Disease Using RGB-Based UAV Images
Remote Sens. 2022, 14(1), 150; https://doi.org/10.3390/rs14010150 - 30 Dec 2021
Cited by 1 | Viewed by 828
Abstract
Pine wilt is a devastating disease that typically kills affected pine trees within a few months. In this paper, we confront the problem of detecting pine wilt disease. The image samples previously used for pine wilt disease detection are highly ambiguous due to poor image resolution and the presence of "disease-like" objects. We therefore created a new dataset using large orthophotographs collected from 32 cities, 167 regions, and 6121 pine wilt disease hotspots in South Korea. In our system, pine wilt disease is detected in two stages. In the first stage, the disease and hard negative samples are collected using a convolutional neural network. Because the diseased areas vary in size and color, and the disease manifests differently from the early stage to the late stage, hard negative samples are further categorized into six classes to simplify the complexity of the dataset. In the second stage, we use an object detection model to localize the disease and the "disease-like" hard negative samples. We use several image augmentation methods to boost system performance and avoid overfitting. The test process is divided into a patch-based test and a real-world test. In the patch-based test, we use test-time augmentation to average the system's predictions across multiple augmented copies of the data; the results show a mean average precision of 89.44% in five-fold cross-validation, an increase of around 5% over the alternative system. In the real-world test, we collected 10 orthophotographs of various resolutions and areas, and our system successfully detected 711 out of 730 potential disease spots.
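Test-time augmentation (TTA), used in the patch-based test above, runs the model on several augmented copies of an input, maps each prediction back to the original geometry, and averages. A generic sketch; the `model`, `augmentations`, and `inverses` arguments are illustrative stand-ins, not the paper's components:

```python
def tta_predict(image, model, augmentations, inverses):
    """Average model predictions over augmented copies of the input.
    Each augmentation is paired with the inverse that maps its
    prediction back into the original frame."""
    preds = [inv(model(aug(image))) for aug, inv in zip(augmentations, inverses)]
    n = len(preds)
    return [sum(p[i] for p in preds) / n for i in range(len(preds[0]))]
```

For geometric augmentations such as flips, the inverse is the same flip applied to the prediction; averaging then smooths out orientation-dependent errors at the cost of extra inference passes.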
