Special Issue "Deep Learning for Radar and Sonar Image Processing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 1 December 2021.

Special Issue Editors

Prof. Dr. Alexandre Baussard
Guest Editor
Institut Charles Delaunay, Université de Technologie de Troyes, 10000 Troyes, France
Interests: electromagnetic and acoustic systems; inverse problems; machine learning; multiscale/multiresolution signal and image processing
Prof. Dr. Ming-Der Yang
Guest Editor
Department of Civil Engineering, National Chung Hsing University, 250 Kuokuang Rd., Taichung 402, Taiwan
Interests: image processing; AI; UAVs; civil water conservancy; disaster prevention project; satellite telemetry

Special Issue Information

Dear Colleagues,

Over the past few years, radar and sonar image processing and understanding, for both civilian and defense applications, have taken advantage of breakthroughs in artificial intelligence, especially deep learning. Unfortunately, specialists from the radar and sonar fields do not interact much with each other. The aim of this Special Issue is to increase these exchanges and to allow experts from other areas to understand the specificities of radar and sonar problems. Indeed, radar and sonar images have particularities compared with common optical images. Processing these data therefore requires certain precautions, and specific developments must be made to address applications such as image segmentation or object detection. However, one of the main problems, especially in defense applications, is the lack of data. To overcome this problem, several solutions can be considered, such as image synthesis using generative adversarial networks (GANs) to create or enlarge training sets, domain adaptation, or transfer learning.

Topics for this Special Issue on deep learning for radar and sonar image processing include but are not limited to the following:

  • Image segmentation;
  • Object detection;
  • Object classification;
  • Image synthesis;
  • Domain adaptation;
  • Transfer learning;
  • Supervised, semisupervised, and unsupervised learning.

Prof. Dr. Alexandre Baussard
Prof. Dr. Ming-Der Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • radar
  • sonar
  • deep learning
  • segmentation
  • detection
  • classification
  • image synthesis
  • domain adaptation
  • transfer learning

Published Papers (6 papers)


Research

Article
Unknown SAR Target Identification Method Based on Feature Extraction Network and KLD–RPA Joint Discrimination
Remote Sens. 2021, 13(15), 2901; https://doi.org/10.3390/rs13152901 - 23 Jul 2021
Abstract
Recently, deep learning (DL) has been successfully applied in automatic target recognition (ATR) tasks on synthetic aperture radar (SAR) images. However, limited by the lack of SAR image target datasets and the high cost of labeling, existing DL-based approaches can only accurately recognize targets present in the training dataset. High-precision identification of unknown SAR targets in practical applications is therefore one of the important capabilities that a SAR–ATR system should be equipped with. To this end, we propose a novel DL-based identification method for unknown SAR targets with joint discrimination. First, a feature extraction network (FEN) trained on a limited dataset is used to extract SAR target features, and the unknown targets are then roughly separated from the known targets by computing the Kullback–Leibler divergence (KLD) of the target feature vectors. For targets that cannot be distinguished by KLD, t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction is applied to their feature vectors to calculate the relative position angle (RPA). Finally, the known and unknown targets are finely identified based on RPA. Experimental results on the MSTAR dataset demonstrate that the proposed method achieves higher identification accuracy for unknown SAR targets than existing methods while maintaining high recognition accuracy for known targets. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
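As a rough illustration of the screening idea described in the abstract (not the authors' exact implementation), a KLD-based unknown-target check might look like the following sketch. The function names, the softmax normalization of feature vectors, the per-class mean features, and the threshold are all assumptions made for this example:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KLD between two discrete distributions p and q (normalized internally)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def is_unknown(feature, known_class_means, threshold):
    """Flag a target as unknown when its smallest KLD to any known-class
    mean feature exceeds a threshold (hypothetical screening rule)."""
    p = np.exp(feature) / np.exp(feature).sum()  # softmax -> distribution
    divs = []
    for m in known_class_means:
        q = np.exp(m) / np.exp(m).sum()
        divs.append(kl_divergence(p, q))
    return min(divs) > threshold
```

In this sketch, targets whose feature distributions stay close (in KLD terms) to some known-class mean would pass to normal classification, while the rest would be routed to the finer RPA-based stage the abstract describes.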

Article
Fast Complex-Valued CNN for Radar Jamming Signal Recognition
Remote Sens. 2021, 13(15), 2867; https://doi.org/10.3390/rs13152867 - 22 Jul 2021
Abstract
Jamming is a serious threat to the survival of a radar system; therefore, recognizing the type of a radar jamming signal is an important part of radar countermeasures. Recently, convolutional neural networks (CNNs) have shown their effectiveness in radar signal processing, including jamming signal recognition. However, most existing CNN methods do not treat radar jamming as a complex-valued signal. In this study, a complex-valued CNN (CV-CNN) is investigated to fully exploit the inherent characteristics of radar jamming signals, and we find that it obtains better recognition accuracy than a real-valued CNN (RV-CNN). CV-CNNs contain more parameters, however, and therefore need more inference time. To reduce parameter redundancy and speed up recognition, a fast CV-CNN (F-CV-CNN) based on pruning is proposed for fast radar jamming signal recognition. The experimental results show that the CV-CNN and F-CV-CNN methods obtain good recognition performance in terms of both accuracy and speed. The proposed methods open a new window for future research, showing the great potential of CV-CNN-based methods for radar signal processing. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
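The core operation behind a CV-CNN can be illustrated with a minimal sketch: a complex convolution decomposed into four real convolutions via (a+ib)(c+id) = (ac−bd) + i(ad+bc). The 1-D valid-mode setting and the function name are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode 1-D complex convolution built from four real convolutions.
    This is how complex-valued layers are often realized on real-arithmetic
    hardware: real and imaginary parts are carried as separate channels."""
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr, 'valid') - np.convolve(xi, wi, 'valid')
    imag = np.convolve(xr, wi, 'valid') + np.convolve(xi, wr, 'valid')
    return real + 1j * imag
```

The four-real-convolution decomposition also makes the parameter-count argument in the abstract concrete: each complex filter carries a real and an imaginary kernel, roughly doubling the weights of a real-valued layer of the same shape.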

Article
A Universal Automatic Bottom Tracking Method of Side Scan Sonar Data Based on Semantic Segmentation
Remote Sens. 2021, 13(10), 1945; https://doi.org/10.3390/rs13101945 - 17 May 2021
Cited by 1
Abstract
Determining the altitude of a side-scan sonar (SSS) above the seabed is critical for correcting geometric distortions in sonar images. Usually, a technique named bottom tracking is applied to estimate the distance between the sonar and the seafloor. However, traditional methods for bottom tracking often require pre-defined thresholds and complex optimization processes, which makes it difficult to achieve ideal results in complex underwater environments without manual intervention. In this paper, a universal automatic bottom tracking method based on semantic segmentation is proposed. First, the waterfall images generated from SSS backscatter sequences are labeled as water column (WC) and seabed parts, then split into specific patches to build the training dataset. Second, a symmetrical information synthesis module (SISM) is designed and added to DeepLabv3+, which not only weakens the strong echoes in the WC area but also gives the network the capability to consider the symmetry characteristic of bottom lines; most importantly, the independent module can easily be combined with any other neural network. The integrated network is then trained with the established dataset. Third, a coarse-to-fine segmentation strategy using the well-trained model is proposed to segment SSS waterfall images quickly and accurately. In addition, a fast bottom line search algorithm is proposed to further reduce the time consumption of bottom tracking. Finally, the proposed method is validated with data measured by several commonly used SSSs in various underwater environments. The results show that the proposed method achieves a bottom tracking accuracy of 1.1 pixels mean error and 1.26 pixels standard deviation at a speed of 2128 pings/s, and is robust to interference factors. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
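Once a waterfall image has been segmented into water-column and seabed classes, extracting a bottom line reduces to finding the first seabed sample in each ping. A minimal sketch, assuming a binary mask with rows as pings (this is not the paper's optimized search algorithm):

```python
import numpy as np

def bottom_line_from_mask(mask):
    """Given a binary waterfall mask (1 = seabed, 0 = water column), return
    for each ping (row) the column index of the first seabed sample,
    or -1 when no seabed pixel was segmented in that ping."""
    h, _ = mask.shape
    line = np.full(h, -1, dtype=int)
    for r in range(h):
        cols = np.flatnonzero(mask[r])
        if cols.size:
            line[r] = cols[0]
    return line
```

The water-column width recovered this way is what geometric (slant-range) correction of the sonar image ultimately consumes; the paper's contribution lies in making the segmentation step robust and fast, which this sketch takes as given.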

Article
Bottom Detection from Backscatter Data of Conventional Side Scan Sonars through 1D-UNet
Remote Sens. 2021, 13(5), 1024; https://doi.org/10.3390/rs13051024 - 08 Mar 2021
Abstract
As widely applied in many underwater research fields, conventional side-scan sonars require the sonar height above the seabed for geocoding seabed images. However, many interference factors, including compensation with unknown gains, suspended matter, etc., bring difficulties to bottom detection. Existing methods need manual parameter setups or postprocessing, which limits automatic and real-time processing in complex situations. To solve this problem, a one-dimensional U-Net (1D-UNet) model for sea bottom detection from side-scan data, and a bottom detection and tracking method based on it, are proposed in this work. First, the basic theory of sonar bottom detection and the interference factors are introduced, which indicates that deep learning of the bottom is a feasible solution. Then, a 1D-UNet model for detecting the sea bottom position from side-scan backscatter strength sequences is proposed, and the structure and implementation of this model are illustrated in detail. Finally, bottom detection and tracking algorithms for a single ping and for continuous pings are presented on the basis of the proposed model. Measured side-scan sonar data from Meizhou Bay and Bayuquan District were selected for the experiments to verify the model and methods. The 1D-UNet model was first trained and applied with the side-scan data from Meizhou Bay. The training and validation accuracies were 99.92% and 99.77%, respectively, and the sea bottom detection accuracy on the training survey line was 99.88%. The 1D-UNet model showed good robustness to the interference factors of bottom detection and fully real-time performance in comparison with other methods. Moreover, the trained 1D-UNet model was used to process the data from Bayuquan District to prove model generality. The proposed 1D-UNet model for bottom detection has been proven effective for side-scan sonar data and also has great potential for wider application to other types of sonars. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
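The per-sample supervision implied by this setup can be sketched simply: for one ping, every backscatter sample before the bottom return is water column and everything from the bottom return onward is seabed. The function name and the 0/1 encoding are illustrative assumptions, not the paper's data pipeline:

```python
import numpy as np

def ping_labels(num_samples, bottom_index):
    """Per-sample labels for one side-scan ping: 0 = water column (before
    the bottom return), 1 = seabed (at and after the bottom return).
    A 1-D segmentation network then learns to predict this label sequence
    from the raw backscatter strength sequence."""
    labels = np.zeros(num_samples, dtype=int)
    labels[bottom_index:] = 1
    return labels
```

Recovering the bottom position at inference time is the inverse step: locate the 0-to-1 transition in the predicted label sequence, per ping.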

Article
Multi-Block Mixed Sample Semi-Supervised Learning for SAR Target Recognition
Remote Sens. 2021, 13(3), 361; https://doi.org/10.3390/rs13030361 - 21 Jan 2021
Abstract
In recent years, synthetic aperture radar (SAR) automatic target recognition (ATR) has played a crucial role in multiple fields and has received widespread attention. Compared with optical image recognition, which enjoys massive annotation data, the lack of sufficient labeled images limits the performance of deep-learning-based SAR ATR. Annotating targets in SAR images is expensive and time-consuming, while purely unsupervised SAR target recognition struggles to meet practical needs. In this situation, we propose a semi-supervised sample mixing method for SAR target recognition, named multi-block mixed (MBM), which can effectively utilize unlabeled samples. During the data preprocessing stage, a multi-block mixing method is used to interpolate a small part of each training image to generate new samples. Then, the new samples are used to improve the recognition accuracy of the model. To verify the effectiveness of the proposed method, experiments are carried out on the moving and stationary target acquisition and recognition (MSTAR) data set. The experimental results fully demonstrate that the proposed MBM semi-supervised learning method can effectively address the problem of annotation insufficiency in SAR data sets and can learn valuable information from unlabeled samples, thereby improving recognition performance. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
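The block-wise mixing idea can be sketched as follows. The block size, block count, and hard patch pasting (rather than a weighted blend) are assumptions made for illustration, not the paper's exact MBM scheme:

```python
import numpy as np

def multi_block_mix(img_a, img_b, block=8, n_blocks=4, rng=None):
    """Hypothetical sketch of block-wise sample mixing: paste n_blocks random
    block x block patches from img_b into a copy of img_a to synthesize a
    new training sample from two existing (possibly unlabeled) ones."""
    rng = rng or np.random.default_rng(0)
    out = img_a.copy()
    h, w = out.shape
    for _ in range(n_blocks):
        r = rng.integers(0, h - block + 1)
        c = rng.integers(0, w - block + 1)
        out[r:r + block, c:c + block] = img_b[r:r + block, c:c + block]
    return out
```

In a semi-supervised setting, such mixed samples are typically paired with a correspondingly mixed target (e.g., a blend of labels or of model pseudo-labels), which is how unlabeled images end up contributing training signal.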

Article
A Novel LSTM Model with Interaction Dual Attention for Radar Echo Extrapolation
Remote Sens. 2021, 13(2), 164; https://doi.org/10.3390/rs13020164 - 06 Jan 2021
Abstract
Precipitation nowcasting is a significant task in operational weather forecasting, and radar echo map extrapolation plays a vital role in it. Recently, deep learning techniques such as Convolutional Recurrent Neural Network (ConvRNN) models have been designed for this task. These models, albeit performing much better than conventional optical-flow-based approaches, suffer from a common problem of underestimating the high-echo-value parts. This drawback is serious for precipitation nowcasting, as those parts often correspond to heavy rains that may cause natural disasters. In this paper, we propose a novel interaction dual attention long short-term memory (IDA-LSTM) model to address this drawback. In the method, an interaction framework is developed for the ConvRNN unit to fully exploit short-term context information by constructing a series of coupled convolutions on the input and hidden states. Moreover, a dual attention mechanism over channels and positions is developed to recall information forgotten over the long term. Comprehensive experiments have been conducted on the CIKM AnalytiCup 2017 data sets, and the results show the effectiveness of IDA-LSTM in addressing the underestimation drawback. The extrapolation performance of IDA-LSTM is superior to that of state-of-the-art methods. Full article
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
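As a toy illustration of the channel half of a dual attention mechanism (a squeeze-and-excitation-style reweighting, not the paper's IDA formulation), rescaling feature channels by their pooled responses can be sketched as:

```python
import numpy as np

def channel_attention(feat):
    """Toy channel attention over a (C, H, W) feature map: global-average-pool
    each channel to one score, softmax the scores into gates, and rescale
    the channels. Position attention would analogously reweight over H x W."""
    pooled = feat.mean(axis=(1, 2))                   # squeeze: (C,)
    gates = np.exp(pooled) / np.exp(pooled).sum()     # softmax over channels
    return feat * gates[:, None, None]                # excite: rescale channels
```

The point of such gates in a recurrent extrapolation model is that they let the network re-emphasize channels (or positions) whose information has decayed in the hidden state, which is the "recalling forgotten information" role the abstract describes.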
