
Artificial Intelligence for Object Detection in Optical, Radar and Lidar Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (15 July 2022) | Viewed by 31537

Special Issue Editors


Guest Editor
Department of Automatic Control and Robotics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krakow, Poland
Interests: computer vision; machine learning; deep neural networks

Guest Editor
AGH University of Science and Technology, Department of Automatic Control and Robotics, al. Mickiewicza 30, 30-059 Krakow, Poland
Interests: operation research; approximation algorithms; machine learning; multimedia analysis and processing; modelling discrete optimization problems

Guest Editor
Department of Photogrammetry, Remote Sensing of Environment and Spatial Engineering, Faculty of Mining Surveying and Environmental Engineering, AGH University of Science and Technology, Krakow, Poland
Interests: photogrammetry; remote sensing; laser scanning in engineering applications; digital image processing; UAV; AI; GIS

Special Issue Information

Dear Colleagues,

In the last decade, artificial intelligence, and especially deep neural networks, have become an extremely effective tool for the automatic detection of objects, capable of analysing large amounts of data in reasonable time. There is an increasing number of applications related to the detection of objects and phenomena in satellite and aerial images, where the efficiency of deep networks is comparable to, or even higher than, that of human experts, especially because a network can accept input of high dimensionality, such as hyperspectral images.

The scope of this special issue includes but is not limited to the detection of objects in:

  • optical images, including multispectral and hyperspectral images
  • radar images, which also includes all types of SAR imaging
  • lidar data
  • fusion of data from different sensors

Submissions can be related not only to the detection of physical objects but also to phenomena that can be detected through Earth surface observation. We also welcome papers in which artificial intelligence is used not directly for object detection but for image processing that can be a preliminary step towards detection, such as the fusion of data from different sensors, automatic calibration of sensors, SAR-to-optical image translation, or image denoising.

Dr. Pawel Rotter
Dr. Wojciech Chmiel
Dr. Sławomir Mikrut
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • object detection
  • multispectral images
  • hyperspectral images
  • Synthetic Aperture Radar
  • lidar
  • photogrammetry

Published Papers (13 papers)


Research

Jump to: Other

22 pages, 1443 KiB  
Article
Co-Correcting: Combat Noisy Labels in Space Debris Detection
by Hui Li, Zhaodong Niu, Quan Sun and Yabo Li
Remote Sens. 2022, 14(20), 5261; https://doi.org/10.3390/rs14205261 - 21 Oct 2022
Cited by 4 | Viewed by 1547
Abstract
Space debris detection is vital to space missions and space situation awareness. Convolutional neural networks are introduced to detect space debris due to their excellent performance. However, noisy labels, caused by false alarms, exist in space debris detection, and cause ambiguous targets for the training of networks, leading to networks overfitting the noisy labels and losing the ability to detect space debris. To remedy this challenge, we introduce label-noise learning to space debris detection and propose a novel label-noise learning paradigm, termed Co-correcting, to overcome the effects of noisy labels. Co-correcting comprises two identical networks, and the predictions of these networks serve as auxiliary supervised information to mutually correct the noisy labels of their peer networks. In this manner, the effect of noisy labels can be mitigated by the mutual rectification of the two networks. Empirical experiments show that Co-correcting outperforms other state-of-the-art methods of label-noise learning, such as Co-teaching and JoCoR, in space debris detection. Even with a high label noise rate, the network trained via Co-correcting can detect space debris with high detection probability. Full article
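The mutual-correction step described above can be sketched as a small rule: where the peer network is highly confident and disagrees with the current (possibly noisy) label, the label is replaced by the peer's prediction. This is an illustrative toy, not the authors' implementation; the confidence threshold and array shapes are assumptions.

```python
import numpy as np

def co_correct(labels, peer_probs, threshold=0.9):
    """Toy mutual label-correction step: where the peer network is
    highly confident and disagrees with the current (possibly noisy)
    label, replace the label with the peer's prediction."""
    labels = labels.copy()
    peer_pred = peer_probs.argmax(axis=1)
    confident = peer_probs.max(axis=1) >= threshold
    flip = confident & (peer_pred != labels)
    labels[flip] = peer_pred[flip]
    return labels

# Noisy labels for 4 samples; sample 1 is mislabeled.
noisy = np.array([0, 0, 1, 1])
# Peer network's softmax outputs (confident that sample 1 is class 1).
peer = np.array([[0.95, 0.05],
                 [0.02, 0.98],
                 [0.10, 0.90],
                 [0.40, 0.60]])   # low confidence: label kept
corrected = co_correct(noisy, peer)
print(corrected)  # [0 1 1 1]
```

In the full method, each of the two identical networks plays the peer role for the other, so corrections flow in both directions during training.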

19 pages, 6706 KiB  
Article
Multistage Adaptive Point-Growth Network for Dense Point Cloud Completion
by Ruidong Hao, Zhonghui Wei, Xu He, Kaifeng Zhu, Jun Wang, Jiawei He and Lei Zhang
Remote Sens. 2022, 14(20), 5214; https://doi.org/10.3390/rs14205214 - 18 Oct 2022
Cited by 2 | Viewed by 1866
Abstract
The point cloud data from actual measurements are often sparse and incomplete, making it difficult to apply them directly to visual processing and 3D reconstruction. The point cloud completion task can predict missing parts based on a sparse and incomplete point cloud model. However, the disordered and unstructured characteristics of point clouds make it difficult for neural networks to obtain detailed spatial structures and topological relationships, resulting in a challenging point cloud completion task. Existing point cloud completion methods can only predict the rough geometry of the point cloud, but cannot accurately predict the local details. To address the shortcomings of existing point cloud completion methods, this paper describes a novel network for adaptive point cloud growth, MAPGNet, which generates a sparse skeletal point cloud using the skeletal features in the composite encoder, and then adaptively grows the local point cloud in the spherical neighborhood of each point using the growth features to complete the details of the point cloud in two steps. The Offset Transformer module is added in the completion process to enhance the contextual connection between point clouds. As a result, MAPGNet improves the quality of the generated point clouds and recovers more local detail information. Experimental results comparing our algorithm with other state-of-the-art algorithms on different datasets show that our algorithm has advantages in dense point cloud completion. Full article
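The point-growth idea, generating new points in the spherical neighborhood of each skeleton point, can be sketched geometrically as follows. In MAPGNet the offsets come from learned growth features; here they are random, so this is only an illustration of the neighborhood constraint under assumed parameters (`k` points per seed, sphere `radius`).

```python
import numpy as np

def grow_points(skeleton, k=8, radius=0.1, seed=0):
    """Toy 'point growth': for each skeleton point, sample k offsets
    inside a sphere of the given radius and add them as new points."""
    rng = np.random.default_rng(seed)
    n = len(skeleton)
    # Directions on the unit sphere, scaled by radii in [0, radius).
    dirs = rng.normal(size=(n, k, 3))
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    radii = radius * rng.random((n, k, 1))
    return (skeleton[:, None, :] + dirs * radii).reshape(-1, 3)

skeleton = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
dense = grow_points(skeleton, k=16, radius=0.05)
print(dense.shape)  # (32, 3)
```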

25 pages, 1870 KiB  
Article
Unsupervised Radar Target Detection under Complex Clutter Background Based on Mixture Variational Autoencoder
by Xueling Liang, Bo Chen, Wenchao Chen, Penghui Wang and Hongwei Liu
Remote Sens. 2022, 14(18), 4449; https://doi.org/10.3390/rs14184449 - 6 Sep 2022
Cited by 4 | Viewed by 2111
Abstract
The clutter background in modern radar target detection is complex and changeable. The performance of classical detectors based on parametric statistical modeling methods is often degraded due to model mismatch. Existing data-driven deep learning methods require cumbersome and expensive annotations. Furthermore, the performance of the detection network is severely degraded when the detection scene changes, since the trained network with the data from one scene is not suitable for another scene with different data distribution. To this end, it is crucial to develop an unsupervised detection method that can finely model complex and changing clutter scenes. This problem is challenging yet rewarding because it completely eliminates the cost of obtaining cumbersome annotations. In this paper, we introduce GM-CVAE, a novel unsupervised Gaussian Mixture Variational Autoencoder with a one-dimensional Convolutional neural network approach to finely model complex and changing clutter. Furthermore, we develop an unsupervised narrow-band radar target detection strategy based on reconstructed likelihood. Comprehensive experiments are carried out to show that the proposed method realizes the refined modeling of clutter and guarantees superior detection performance in the simulated complex clutter environment. Compared with baselines, the proposed method shows better performance. Full article
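A minimal stand-in for reconstruction-likelihood detection: fit a simple Gaussian clutter model and flag samples whose log-likelihood falls below a threshold. The paper's GM-CVAE is far richer (a Gaussian-mixture variational autoencoder with a 1D convolutional network); this sketch shows only the thresholding logic, with an assumed threshold.

```python
import numpy as np

def fit_gaussian(clutter):
    """Fit a single-Gaussian clutter model (mean and std)."""
    return clutter.mean(), clutter.std()

def detect(x, mu, sigma, threshold):
    """Flag samples whose log-likelihood under the clutter model falls
    below a threshold (analogous to a low reconstruction likelihood)."""
    loglik = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)
    return loglik < threshold

rng = np.random.default_rng(1)
clutter = rng.normal(0.0, 1.0, size=1000)   # training clutter, no labels
mu, sigma = fit_gaussian(clutter)
x = np.array([0.1, -0.5, 8.0])              # last sample is a target-like outlier
flags = detect(x, mu, sigma, threshold=-10.0)
print(flags)  # [False False  True]
```

The unsupervised character of the method comes from the same place as in this toy: only clutter statistics are modeled, so no target annotations are needed.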

17 pages, 2740 KiB  
Article
A Robust Underwater Multiclass Fish-School Tracking Algorithm
by Tao Liu, Shuangyan He, Haoyang Liu, Yanzhen Gu and Peiliang Li
Remote Sens. 2022, 14(16), 4106; https://doi.org/10.3390/rs14164106 - 21 Aug 2022
Cited by 6 | Viewed by 2647
Abstract
State-of-the-art multiple-object tracking methods are frequently applied to people or vehicle tracking, but rarely involve underwater-object tracking. Compared with the processing in non-underwater photos or videos, underwater fish tracking is challenging due to variations in light conditions, water turbidity levels, shape deformations, and the similar appearances of fish. This article proposes a robust underwater fish-school tracking algorithm (FSTA). The FSTA is based on the tracking-by-detection paradigm. To solve the problem of low recognition accuracy in an underwater environment, we add an amendment detection module that uses prior knowledge to modify the detection result. Second, we introduce an underwater data association algorithm for aquatic non-rigid organisms that recombines representation and location information to refine the data matching process and improve the tracking results. The Resnet50-IBN network is used as a re-identification network to track fish. We introduce a triplet loss function based on a centroid to train the feature extraction network. The multiple-object tracking accuracy (MOTA) of the FSTA is 79.1% on the underwater dataset, which shows that it can achieve state-of-the-art performance in a complex real-world marine environment. Full article
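A hedged sketch of a centroid-based triplet loss as mentioned above: distances are measured from the anchor to class centroids rather than to individual samples. The exact formulation in the paper may differ; the margin and the toy 2D features below are assumptions.

```python
import numpy as np

def centroid_triplet_loss(anchor, pos_feats, neg_feats, margin=0.3):
    """Toy centroid triplet loss: pull the anchor toward the centroid of
    same-identity features and push it away from the centroid of a
    different identity, with a hinge margin."""
    c_pos = pos_feats.mean(axis=0)
    c_neg = neg_feats.mean(axis=0)
    d_pos = np.linalg.norm(anchor - c_pos)
    d_neg = np.linalg.norm(anchor - c_neg)
    return max(0.0, d_pos - d_neg + margin)

anchor = np.array([0.0, 0.0])
same = np.array([[0.1, 0.0], [-0.1, 0.0]])     # centroid at the origin
other = np.array([[2.0, 0.0], [2.0, 2.0]])     # centroid at (2, 1)
loss = centroid_triplet_loss(anchor, same, other, margin=0.3)
print(loss)  # 0.0: the negative centroid is far, so the hinge is inactive
```

Averaging to a centroid makes the loss less sensitive to single hard or mislabeled samples, which is useful for visually similar fish.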

24 pages, 5936 KiB  
Article
Long-Distance Multi-Vehicle Detection at Night Based on Gm-APD Lidar
by Yuanxue Ding, Yanchen Qu, Jianfeng Sun, Dakuan Du, Yanze Jiang and Hailong Zhang
Remote Sens. 2022, 14(15), 3553; https://doi.org/10.3390/rs14153553 - 24 Jul 2022
Cited by 6 | Viewed by 1821
Abstract
Long-distance multi-vehicle detection at night is critical in military operations. Due to insufficient light at night, the visual features of vehicles are difficult to distinguish, and many missed detections occur. This paper proposes a two-level detection method for long-distance nighttime multi-vehicles based on Gm-APD lidar intensity images and point cloud data. The method is divided into two levels. The first level is 2D detection, which enhances the local contrast of the intensity image and improves the brightness of weak and small objects. With the confidence threshold set, the detection result greater than the threshold is reserved as a reliable object, and the detection result less than the threshold is a suspicious object. In the second level of 3D recognition, the suspicious object area from the first level is converted into the corresponding point cloud classification judgment, and the object detection score is obtained through comprehensive judgment. Finally, the object results of the two-level recognition are merged into the final detection result. Experimental results show that the method achieves a detection accuracy of 96.38% and can effectively improve the detection accuracy of multiple vehicles at night, which is better than the current state-of-the-art detection methods. Full article
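The two-level control flow, accept confident 2D detections directly, re-judge suspicious ones with a 3D point-cloud classifier, then merge, can be sketched as follows. The detection tuples and the `classify_3d` stand-in are hypothetical, not the paper's data structures.

```python
def two_level_merge(detections, conf_threshold, classify_3d):
    """Toy version of the two-level scheme: detections above the 2D
    confidence threshold are accepted directly; the rest are re-judged
    by a 3D point-cloud classifier before the results are merged."""
    accepted = []
    for box, conf in detections:
        if conf >= conf_threshold:
            accepted.append(box)            # reliable object
        elif classify_3d(box):              # suspicious: check point cloud
            accepted.append(box)
    return accepted

dets = [("car_A", 0.92), ("car_B", 0.41), ("noise", 0.30)]
# Hypothetical 3D stage: here it confirms only car_B.
result = two_level_merge(dets, 0.5, classify_3d=lambda b: b == "car_B")
print(result)  # ['car_A', 'car_B']
```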

20 pages, 1349 KiB  
Article
Interrupted-Sampling Repeater Jamming-Suppression Method Based on a Multi-Stages Multi-Domains Joint Anti-Jamming Depth Network
by Xuesi He, Kuo Liao, Shupeng Peng, Zhenjie Tian and Jiyan Huang
Remote Sens. 2022, 14(14), 3445; https://doi.org/10.3390/rs14143445 - 18 Jul 2022
Cited by 11 | Viewed by 1816
Abstract
Jamming will seriously affect the detection ability of radar, so it is essential to suppress the jamming of radar echoes. Interrupted-sampling repeater jamming (ISRJ) based on a digital-radio-frequency-memory (DRFM) device can generate false targets at the victim radar by the interception and repeating of the radar transmission signal, which is highly correlated with the true target signal. ISRJ can achieve main lobe jamming and has both deception and oppressive jamming effects, so it is difficult for the existing methods to suppress this jamming effectively. In this paper, we propose a deep-learning-based anti-jamming network, named MSMD-net (Multi-stage Multi-domain joint anti-jamming depth network), for suppressing ISRJ main lobe jamming in the radar echo. In the first stage of MSMD-net, considering that the target signal is difficult to detect under a high jamming-to-signal ratio (JSR), we propose a preprocessing method of limiting filtering on the time-frequency domain to reduce the JSR using the auxiliary knowledge of radar. In the second stage, taking advantage of the discontinuity of the jamming in the time domain, we propose a UT-net network that combines the U-net structure and the transformer module. The UT-net performs target feature extraction and signal reconstruction in the signal time-frequency domain and preliminarily realizes the suppression of the jamming component. In the third stage, combined with phase information, a one-dimensional complex residual convolution U-net network (ResCU-net) is constructed in the time domain to realize jamming filtering and signal recovery further. The experimental results show that MSMD-net can obtain the best jamming suppression effect under different transmitted signals, different jamming modes, and different jamming parameters. Full article

21 pages, 3706 KiB  
Article
IRSTFormer: A Hierarchical Vision Transformer for Infrared Small Target Detection
by Gao Chen, Weihua Wang and Sirui Tan
Remote Sens. 2022, 14(14), 3258; https://doi.org/10.3390/rs14143258 - 6 Jul 2022
Cited by 23 | Viewed by 2653
Abstract
Infrared small target detection occupies an important position in the infrared search and track system. The most common size of infrared images has grown to 640×512, and the field-of-view (FOV) has also increased significantly. As a result, there is more interference that hinders the detection of small targets in the image. However, traditional model-driven methods do not have the capability of feature learning, resulting in poor adaptability to various scenes. Owing to the locality of convolution kernels, recent convolutional neural networks (CNN) cannot model the long-range dependency in the image to suppress false alarms. In this paper, we propose a hierarchical vision transformer-based method for infrared small target detection in larger size and FOV images of 640×512. Specifically, we design a hierarchical overlapped small patch transformer (HOSPT), instead of the CNN, to encode multi-scale features from the single-frame image. For the decoder, a top-down feature aggregation module (TFAM) is adopted to fuse features from adjacent scales. Furthermore, after analyzing existing loss functions, a simple yet effective combination is exploited to optimize the network convergence. Compared to other state-of-the-art methods, the normalized intersection-over-union (nIoU) on our IRST640 dataset and the public SIRST dataset reaches 0.856 and 0.758. Detailed ablation experiments are conducted to validate the effectiveness and reasonability of each component in the method. Full article
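One common way a sample-averaged (normalized) IoU is computed over binary masks is shown below; the paper's exact nIoU definition may differ from this sketch.

```python
import numpy as np

def niou(preds, masks):
    """Sample-averaged IoU over binary masks: intersection / union per
    sample, then the mean across samples."""
    scores = []
    for p, m in zip(preds, masks):
        inter = np.logical_and(p, m).sum()
        union = np.logical_or(p, m).sum()
        scores.append(inter / union if union else 1.0)
    return float(np.mean(scores))

pred = [np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 1]])]
mask = [np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])]
print(niou(pred, mask))  # (1.0 + 0.5) / 2 = 0.75
```

Averaging per sample, rather than pooling all pixels, prevents a few large targets from dominating the score, which matters when targets are small.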

18 pages, 6269 KiB  
Article
Automatic Calibration of a LiDAR–Camera System Based on Instance Segmentation
by Pawel Rotter, Maciej Klemiato and Pawel Skruch
Remote Sens. 2022, 14(11), 2531; https://doi.org/10.3390/rs14112531 - 25 May 2022
Cited by 4 | Viewed by 2223
Abstract
In this article, we propose a method for automatic calibration of a LiDAR–camera system, which can be used in autonomous cars. This approach does not require any calibration pattern, as calibration is only based on real traffic scenes observed by sensors; the results of camera image segmentation are compared with scanning LiDAR depth data. The proposed algorithm superimposes the edges of objects segmented by the Mask-RCNN network with depth discontinuities. The method can run in the background during driving, and it can automatically detect decalibration and correct corresponding rotation matrices in an online and near real-time mode. Experiments on the KITTI dataset demonstrated that, for input data of moderate quality, the algorithm could calculate and correct rotation matrices with an average accuracy of 0.23°. Full article
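The core of the calibration idea, scoring how well segmentation edges line up with depth discontinuities and searching for the transform that maximizes the score, can be illustrated in one dimension. A real implementation searches over rotation matrices; the column-shift search below is a simplified, hypothetical stand-in.

```python
import numpy as np

def edge_overlap(seg_edges, depth_edges):
    """Fraction of segmentation-edge pixels that coincide with LiDAR
    depth discontinuities; a calibration search maximizes this."""
    return np.logical_and(seg_edges, depth_edges).sum() / seg_edges.sum()

def best_shift(seg_edges, depth_edges, shifts):
    """Toy 1D stand-in for the rotation search: try candidate column
    shifts of the depth-edge map and keep the best-scoring one."""
    scores = {s: edge_overlap(seg_edges, np.roll(depth_edges, s, axis=1))
              for s in shifts}
    return max(scores, key=scores.get)

seg = np.zeros((4, 8), dtype=bool); seg[:, 3] = True     # camera edge
depth = np.zeros((4, 8), dtype=bool); depth[:, 5] = True  # decalibrated by 2
print(best_shift(seg, depth, shifts=range(-3, 4)))  # -2
```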

27 pages, 12758 KiB  
Article
An Improved Infrared and Visible Image Fusion Using an Adaptive Contrast Enhancement Method and Deep Learning Network with Transfer Learning
by Jameel Ahmed Bhutto, Lianfang Tian, Qiliang Du, Zhengzheng Sun, Lubin Yu and Toufique Ahmed Soomro
Remote Sens. 2022, 14(4), 939; https://doi.org/10.3390/rs14040939 - 15 Feb 2022
Cited by 7 | Viewed by 2332
Abstract
Deep learning (DL) has attracted significant attention in the field of infrared (IR) and visible (VI) image fusion, and several attempts have been made to enhance the quality of the final fused image. It produces better results than conventional methods; however, the captured image cannot acquire useful information in environments with poor lighting, fog, dense smoke, haze, and the noise generated by sensors. This paper proposes an adaptive fuzzy-based preprocessing enhancement method that automatically enhances the contrast of images with adaptive parameter calculation. The enhanced images are then decomposed into base and detail layers by anisotropic diffusion-based edge-preserving filters that remove noise while smoothing the edges. The detail parts are fed into four convolutional layers of the VGG-19 network through transfer learning to extract feature maps. These feature maps are fused by multiple fusion strategies to obtain the final fused detail layer. The base parts are fused by the PCA method to preserve the energy information. Experimental results reveal that our proposed method achieves state-of-the-art performance compared with existing fusion methods in a subjective evaluation through the visual experience of experts and statistical tests. Moreover, the objective assessment is conducted with various parameters (FMI, SSIMa, API, EN, QFAB, and NFAB) used in the compared methods. The proposed method achieves gains of 0.2651 to 0.3951, 0.5827 to 0.8469, 56.3710 to 71.9081, 4.0117 to 7.9907, and 0.6538 to 0.8727 for FMI, SSIMa, API, EN, and QFAB, respectively. At the same time, the proposed method achieves greater noise reduction (0.3049 to 0.0021), which further justifies its efficacy over conventional methods. Full article

17 pages, 4440 KiB  
Article
Coupling Complementary Strategy to U-Net Based Convolution Neural Network for Detecting Lunar Impact Craters
by Yuqing Mao, Rongao Yuan, Wei Li and Yijing Liu
Remote Sens. 2022, 14(3), 661; https://doi.org/10.3390/rs14030661 - 29 Jan 2022
Cited by 7 | Viewed by 2691
Abstract
Lunar crater detection plays an important role in lunar exploration, and machine learning (ML) exhibits promising advantages in the field. However, previous ML works almost all used a single type of lunar map, such as an elevation map (DEM) or orthographic projection map (WAC), to extract crater features; each of the two types of images has individual limitations in reflecting crater features, which leads to insufficient feature information and in turn influences detection performance. To address this limitation, in this work we propose complementary use of features from the two types of images and accordingly explore an advanced dual-path convolutional neural network (Dual-Path) based on a U-NET structure to effectively conduct feature integration. Dual-Path consists of a contracting path, a bridging path, and an expanding path. The contracting path separately extracts features from DEM and WAC images by means of two independent input branches, while the bridging layer integrates the two types of features by 1 × 1 convolution. Finally, the expanding path, coupled with an attention mechanism, further learns and optimizes the feature information. In addition, a special deep convolution block with a residual module is introduced to avoid network degradation and gradient disappearance. An ablation experiment and a comparison with four competitive models using only DEM features confirm that the feature complementarity can effectively improve detection performance and speed. Our model is further verified on different regions of the whole Moon, exhibiting high robustness and potential in practical applications. Full article
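The bridging layer's 1 × 1 convolution can be viewed as a per-pixel mixing of the stacked DEM and WAC channels. The shapes and the averaging weights below are illustrative assumptions, not the network's learned parameters.

```python
import numpy as np

def fuse_1x1(feat_a, feat_b, weight):
    """Fuse two feature maps of shape (C, H, W) with a 1x1 convolution:
    stack the channels, then mix them per pixel with a (C_out, 2C)
    weight matrix."""
    stacked = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W)
    return np.einsum('oc,chw->ohw', weight, stacked)     # (C_out, H, W)

c, h, w = 2, 3, 3
dem_feat = np.ones((c, h, w))        # features from the DEM branch
wac_feat = 2 * np.ones((c, h, w))    # features from the WAC branch
weight = np.full((c, 2 * c), 0.25)   # simple averaging weights
fused = fuse_1x1(dem_feat, wac_feat, weight)
print(fused[0, 0, 0])  # 0.25 * (1 + 1 + 2 + 2) = 1.5
```

Because the kernel is 1 × 1, the fusion mixes information across channels only, leaving spatial structure to the surrounding convolutional paths.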

11 pages, 3755 KiB  
Communication
License Plate Image Reconstruction Based on Generative Adversarial Networks
by Mianfen Lin, Liangxin Liu, Fei Wang, Jingcong Li and Jiahui Pan
Remote Sens. 2021, 13(15), 3018; https://doi.org/10.3390/rs13153018 - 1 Aug 2021
Cited by 11 | Viewed by 2848
Abstract
License plate image reconstruction plays an important role in Intelligent Transportation Systems. In this paper, a super-resolution image reconstruction method based on Generative Adversarial Networks (GAN) is proposed. The proposed method mainly consists of four parts: (1) pretreatment of the input image; (2) image feature extraction using a residual dense network; (3) progressive sampling, which provides a larger receptive field and more information details; (4) a discriminator based on the Markovian discriminator (PatchGAN), which makes more accurate judgments and guides the generator to reconstruct images with higher quality and detail. On the Chinese City Parking Dataset (CCPD), compared with the current best algorithms, the experimental results prove that our model achieves a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and less reconstruction time, which verifies the feasibility of our approach. Full article
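PSNR, one of the metrics reported above, derives directly from the mean squared error between the reference and reconstructed images (the standard definition; `peak` is 255 for 8-bit images):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
rec = ref + 10.0                 # constant error: MSE = 100
print(round(psnr(ref, rec), 2))  # 10 * log10(255^2 / 100) ≈ 28.13
```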

Other

Jump to: Research

11 pages, 30613 KiB  
Technical Note
Patagonian Andes Landslides Inventory: The Deep Learning’s Way to Their Automatic Detection
by Bastian Morales, Angel Garcia-Pedrero, Elizabet Lizama, Mario Lillo-Saavedra, Consuelo Gonzalo-Martín, Ningsheng Chen and Marcelo Somos-Valenzuela
Remote Sens. 2022, 14(18), 4622; https://doi.org/10.3390/rs14184622 - 16 Sep 2022
Cited by 8 | Viewed by 2344
Abstract
Landslide inventories are crucial to studying the dynamics, associated risks, and effects of these geomorphological processes on the evolution of mountainous landscapes. The production of landslide maps is mainly based on manual visual interpretation methods of aerial and satellite images combined with field surveys. In recent times, advances in machine learning methods have made it possible to explore new semi-automated landslide detection methodologies using remotely detected images. In this sense, developing new artificial intelligence models based on Deep Learning (DL) opens up an excellent opportunity to automate this arduous process. Although the Andes mountain range is one of the most geomorphologically active areas on the planet, the few investigations that use DL mainly focus on mountain ranges in Europe and Asia. One of the main reasons is the low density of landslide data available in the Andean areas, making it difficult to experiment with DL models requiring large data volumes. In this work, we seek to narrow the existing gap in the availability of landslide inventories in the area of the Patagonian Andes. In addition, the feasibility and efficiency of DL techniques are studied to develop landslide detection models in the Andes from the generated datasets. To achieve this goal, we manually generated a dataset of 10,000 landslides for northern Chilean Patagonia (42–45°S), the largest freely accessible landslide dataset in this region. We implement a machine learning model, through DL, to detect landslides in optical images of the Sentinel-2 constellation using a model based on the DeepLabv3+ architecture, a state-of-the-art deep learning network for semantic segmentation. Our results indicate that the algorithm detects landslides with an accuracy of 0.75 at the object level. For its part, the segmentation reaches a precision of 0.86, a recall of 0.74, and an F1-score of 0.79. The correlation of the segmentation, measured through the Matthews correlation coefficient, shows a value of 0.59, and the geometric similarity of the correctly detected landslides, measured through the Jaccard score, reaches 0.70. Although the model shows a good response in the testing area, errors are generated that can be explained by geometric and spectral relationships, which should be solved through new training approaches and datasets. Full article
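All of the segmentation metrics reported above derive from the pixel confusion counts. The sketch below computes them from hypothetical counts chosen to roughly match the precision and recall reported (the actual counts are not given in the abstract).

```python
def seg_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, Matthews correlation coefficient (MCC)
    and Jaccard score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    jaccard = tp / (tp + fp + fn)
    return precision, recall, f1, mcc, jaccard

# Hypothetical counts, not from the paper.
p, r, f1, mcc, j = seg_metrics(tp=74, fp=12, fn=26, tn=888)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.86 0.74 0.8
```

Note that MCC and Jaccard penalize errors differently from F1, which is why a model can score 0.79 on F1 yet only 0.59 on MCC.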

14 pages, 3694 KiB  
Technical Note
Adaptive Network Detector for Radar Target in Changing Scenes
by He Jing, Yongqiang Cheng, Hao Wu and Hongqiang Wang
Remote Sens. 2021, 13(18), 3743; https://doi.org/10.3390/rs13183743 - 18 Sep 2021
Cited by 2 | Viewed by 1684
Abstract
Data-driven deep learning has been successfully applied in radar target detection. However, the performance of the detection network is severely degraded when the detection scene changes, since a network trained with the data from one scene is not suitable for another scene with a different data distribution. To address this problem, an adaptive network detector combined with scene classification is proposed in this paper. Aiming to maximize the posterior probability of the feature vectors, the scene classification network is arranged to control the output ratio of a group of detection sub-networks. Due to the uncertainty of the classification error rate in traditional machine learning, a classifier with a controllable false alarm rate is constructed. In addition, a new network training strategy, which freezes the parameters of the scene classification network and selectively fine-tunes the parameters of the detection sub-networks, is proposed for the adaptive network structure. Comprehensive experiments demonstrate that the proposed method guarantees a high detection probability when the detection scene changes. Compared with some classical detectors, the adaptive network detector shows better performance. Full article
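One way to read "controlling the output ratio" of the detection sub-networks is to mix their outputs weighted by the scene classifier's posteriors. This is a toy interpretation with hypothetical sub-detectors, not the paper's architecture.

```python
import numpy as np

def adaptive_detect(x, scene_posteriors, sub_detectors):
    """Toy adaptive detector: mix the outputs of scene-specific
    sub-detectors, weighted by the scene classifier's posteriors."""
    outputs = np.array([d(x) for d in sub_detectors])
    return float(np.dot(scene_posteriors, outputs))

# Two hypothetical sub-detectors tuned to different clutter scenes.
sea_detector = lambda x: 1.0 if x > 2.0 else 0.0
land_detector = lambda x: 1.0 if x > 5.0 else 0.0
posteriors = np.array([0.9, 0.1])   # classifier: almost surely "sea"
score = adaptive_detect(3.0, posteriors, [sea_detector, land_detector])
print(score)  # 0.9 * 1.0 + 0.1 * 0.0 = 0.9
```

When the scene changes, only the posteriors shift, so the appropriate sub-network dominates the mixture without retraining the whole detector.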
