Emerging Research in Target Detection and Recognition in Remote Sensing Images

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 5328

Special Issue Editors

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410000, China
Interests: intelligent interpretation of remote sensing images; remote sensing image object detection; remote sensing image target recognition
College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100000, China
Interests: synthetic aperture radar (SAR); polarimetric SAR image processing; remote sensing
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410000, China
Interests: remote sensing; pattern recognition; image processing; target detection

Special Issue Information

Dear Colleagues,

In the field of Earth observation, the massive volumes of remote sensing data acquired by large numbers of in-orbit satellites and aircraft bring both richer observational information and greater processing challenges for remote sensing image interpretation. The detection, recognition, and tracking of various types of high-value artificial targets on the ground and at sea have become hotspots in the processing and application of Earth observation information. In recent years, a large number of rapid target detection, recognition, and tracking methods based on artificial intelligence have offered beneficial solutions for processing remote sensing Earth observation information and have had a significant impact on the field of remote sensing. They provide promising tools for addressing many challenging issues in the accuracy and reliability of target detection and recognition in remote sensing images.

In this special issue, we plan to compile a series of papers reporting on methods and technologies for object detection and recognition in remote sensing images that have emerged in recent years. We anticipate that new research will leverage emerging methods and technologies, such as artificial intelligence, to solve practical problems in remote sensing image applications.

Articles may cover, but are not limited to, the following topics:

  • Advanced artificial intelligence-based object detection/recognition/tracking;
  • Remote sensing image change detection/semantic segmentation;
  • Remote sensing multi-sensor data fusion/multimodal data analysis;
  • Remote sensing image super-resolution/restoration;
  • Unsupervised/weakly supervised learning for remote sensing image target detection or recognition;
  • Advanced artificial intelligence technology for remote sensing applications;
  • Intelligent detection and recognition of composite targets based on knowledge graphs or knowledge reasoning.

Dr. Tao Tang
Dr. Canbin Hu
Dr. Yuli Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image processing
  • image interpretation
  • target detection
  • classification/recognition
  • deep networks
  • knowledge graphs

Published Papers (5 papers)

Research

21 pages, 6769 KiB  
Article
A Novel Dynamic Contextual Feature Fusion Model for Small Object Detection in Satellite Remote-Sensing Images
by Hongbo Yang and Shi Qiu
Information 2024, 15(4), 230; https://doi.org/10.3390/info15040230 - 18 Apr 2024
Viewed by 443
Abstract
Ground objects in satellite images pose unique challenges due to their low resolution, small pixel size, lack of texture features, and dense distribution. Detecting small objects in satellite remote-sensing images is a difficult task. We propose a new detector focusing on contextual information and multi-scale feature fusion. Inspired by the notion that surrounding context can aid in identifying small objects, we propose a lightweight context convolution block based on dilated convolutions and integrate it into the convolutional neural network (CNN). We integrate dynamic convolution blocks during the feature fusion step to enhance the upsampling of high-level features. An attention mechanism is employed to focus on the salient features of objects. We have conducted a series of experiments to validate the effectiveness of the proposed model. Notably, it achieved a 3.5% mean average precision (mAP) improvement on the satellite object detection dataset. Another feature of our approach is its lightweight design: we employ group convolution to reduce the computational cost of the proposed contextual convolution module. Compared to the baseline model, our method reduces the number of parameters by 30% and the computational cost by 34%, while maintaining an FPS rate close to that of the baseline. We also validate the detection results through a series of visualizations.
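
As a concrete illustration of the kind of context block described above, a minimal PyTorch-style sketch using parallel dilated group convolutions is given below; the module name, dilation rates, group count, and residual fusion are illustrative assumptions rather than the authors' exact design.

    import torch
    import torch.nn as nn

    class ContextConvBlock(nn.Module):
        """Toy context block: parallel dilated group convolutions gather
        surrounding context at several receptive-field sizes, and a 1x1
        convolution fuses them back to the input channel count."""
        def __init__(self, channels, dilations=(1, 2, 4), groups=4):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=3, padding=d,
                          dilation=d, groups=groups, bias=False)
                for d in dilations
            ])
            self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
            return self.act(x + self.fuse(ctx))  # residual add keeps the original features

    # Example: apply the block to a 256-channel CNN feature map.
    feat = torch.randn(1, 256, 64, 64)
    out = ContextConvBlock(256)(feat)  # shape preserved: (1, 256, 64, 64)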

18 pages, 3383 KiB  
Article
Vehicle Target Recognition in SAR Images with Complex Scenes Based on Mixed Attention Mechanism
by Tao Tang, Yuting Cui, Rui Feng and Deliang Xiang
Information 2024, 15(3), 159; https://doi.org/10.3390/info15030159 - 11 Mar 2024
Viewed by 902
Abstract
With the development of deep learning in the field of computer vision, convolutional neural network models and attention mechanisms have been widely applied to SAR image target recognition. Existing attention-based improvements to convolutional neural networks for SAR image target recognition focus on spatial and channel information, but the relationship between the two and its role in recognition remain under-studied. To address this issue, this article proposes a hybrid attention module, introducing a Mixed Attention (MA) mechanism into the MobileNetV2 network. The proposed MA mechanism jointly accounts for spatial attention (SPA), channel attention (CHA), and coordinate attention (CA), comprehensively weighting input feature maps to enhance the features of regions of interest and thereby improve the recognition rate of vehicle targets in SAR images. The superiority of our algorithm was verified through experiments on the MSTAR dataset.
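
The exact formulation of the MA module is not given in this listing; as a rough illustration of weighting a feature map with combined channel and spatial attention, a sketch along the following lines could serve as a starting point (the class name, reduction ratio, and the way the branches are combined, omitting the coordinate-attention term, are assumptions rather than the authors' design).

    import torch
    import torch.nn as nn

    class MixedAttention(nn.Module):
        """Toy mixed attention: a channel branch and a spatial branch each
        produce weights, and both are applied to the input feature map so
        that salient target regions are emphasised."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            # Channel attention: global pooling -> bottleneck MLP -> sigmoid.
            self.channel = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            # Spatial attention: channel-pooled maps -> 7x7 conv -> sigmoid.
            self.spatial = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, x):
            x = x * self.channel(x)  # reweight channels
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.max(dim=1, keepdim=True).values], dim=1)
            return x * self.spatial(pooled)  # reweight spatial positions

    # Example: insert after a MobileNetV2 stage that outputs 96-channel features.
    out = MixedAttention(96)(torch.randn(1, 96, 28, 28))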

13 pages, 22251 KiB  
Article
Deep Learning Models for Waterfowl Detection and Classification in Aerial Images
by Yang Zhang, Yuan Feng, Shiqi Wang, Zhicheng Tang, Zhenduo Zhai, Reid Viegut, Lisa Webb, Andrew Raedeke and Yi Shang
Information 2024, 15(3), 157; https://doi.org/10.3390/info15030157 - 11 Mar 2024
Viewed by 904
Abstract
Waterfowl population monitoring is essential for wetland conservation. Lately, deep learning techniques have shown promising advancements in detecting waterfowl in aerial images. In this paper, we present a performance evaluation of several popular supervised and semi-supervised deep learning models for waterfowl detection in aerial images, using four new image datasets containing 197,642 annotations. The best-performing model, Faster R-CNN, achieved 95.38% accuracy in terms of mAP. Semi-supervised learning models outperformed supervised models when the same amount of labeled data was used for training. Additionally, we present a performance evaluation of several deep learning models for waterfowl classification in aerial images, using a new real-bird classification dataset consisting of 6,986 examples and a new decoy classification dataset consisting of about 10,000 examples per category across 20 categories. The best model achieved an accuracy of 91.58% on the decoy dataset and 82.88% on the real-bird dataset.
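
For readers unfamiliar with the detector family evaluated here, an off-the-shelf Faster R-CNN can be obtained from torchvision as sketched below; the class count and dummy input are placeholders, and this is not the training pipeline or data used in the paper.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Load a COCO-pretrained Faster R-CNN and swap in a new box predictor for a
    # two-class problem (background + waterfowl); the class count is a placeholder.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

    model.eval()
    with torch.no_grad():
        # A dummy aerial image tile; real inputs would be normalised RGB tensors.
        detections = model([torch.rand(3, 512, 512)])
    print(detections[0]["boxes"].shape, detections[0]["scores"].shape)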

21 pages, 4426 KiB  
Article
Improved Detection Method for Micro-Targets in Remote Sensing Images
by Linhua Zhang, Ning Xiong, Wuyang Gao and Peng Wu
Information 2024, 15(2), 108; https://doi.org/10.3390/info15020108 - 12 Feb 2024
Viewed by 1146
Abstract
With the exponential growth of remote sensing images in recent years, there has been a significant increase in demand for micro-target detection. Effective detection methods for small targets have recently emerged; however, for micro-targets (which have even fewer pixels than small targets), most existing methods are not fully competent in feature extraction, target positioning, and rapid classification. This study proposes an enhanced detection method, especially for micro-targets, in which a combined loss function (consisting of NWD and CIoU) is used instead of a single CIoU loss function. In addition, the lightweight Content-Aware Reassembly of Features (CARAFE) operator replaces the original bilinear interpolation upsampling algorithm, and a spatial pyramid structure is added to the network model's small-target layer. The proposed algorithm is trained and validated on the AI-TOD benchmark dataset. Compared to the speed-oriented YOLOv7-tiny, the mAP0.5 and mAP0.5:0.95 of our improved algorithm increased from 42.0% and 16.8% to 48.7% and 18.9%, improvements of 6.7% and 2.1%, respectively, while the detection speed was almost equal to that of YOLOv7-tiny. Furthermore, our method was also tested on a dataset of multi-scale targets containing small, medium, and large targets. The results show that mAP0.5:0.95 increased from 9.8%, 54.8%, and 68.2% to 12.6%, 55.6%, and 70.1% across the three scales, improvements of 2.8%, 0.8%, and 1.9%, respectively. In summary, the presented method improves detection metrics for micro-targets in various scenarios while satisfying the detection-speed requirements of a real-time system.
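
The combined NWD + CIoU loss is only named above; a hedged sketch of how such a weighted box-regression loss can be assembled is shown below, where the weighting factor alpha, the scale constant c, and the use of torchvision's CIoU loss are our assumptions rather than the paper's exact settings.

    import torch
    from torchvision.ops import complete_box_iou_loss  # provided by recent torchvision releases

    def nwd_loss(pred, target, c=12.8, eps=1e-7):
        """Normalized Wasserstein distance loss for boxes in (x1, y1, x2, y2)
        format, treating each box as a 2-D Gaussian; c is a dataset-dependent
        scale constant (the value here is only a typical tiny-object choice)."""
        def centres_sizes(b):
            cx, cy = (b[:, 0] + b[:, 2]) / 2, (b[:, 1] + b[:, 3]) / 2
            return cx, cy, b[:, 2] - b[:, 0], b[:, 3] - b[:, 1]
        cx1, cy1, w1, h1 = centres_sizes(pred)
        cx2, cy2, w2, h2 = centres_sizes(target)
        w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
                 + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
        return 1.0 - torch.exp(-torch.sqrt(w2_sq + eps) / c)

    def combined_box_loss(pred, target, alpha=0.5):
        """Weighted sum of the NWD and CIoU terms; alpha is a tunable trade-off."""
        return (alpha * nwd_loss(pred, target)
                + (1 - alpha) * complete_box_iou_loss(pred, target, reduction="none"))

    # Example with two predicted boxes and their ground-truth counterparts.
    pred = torch.tensor([[10., 10., 20., 22.], [50., 50., 58., 60.]])
    gt = torch.tensor([[11., 9., 21., 21.], [52., 49., 60., 61.]])
    print(combined_box_loss(pred, gt))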

13 pages, 5546 KiB  
Article
Three-Stage MPViT-DeepLab Transfer Learning for Community-Scale Green Infrastructure Extraction
by Hang Li, Shengjie Zhao and Hao Deng
Information 2024, 15(1), 15; https://doi.org/10.3390/info15010015 - 26 Dec 2023
Viewed by 1109
Abstract
The extraction of community-scale green infrastructure (CSGI) poses challenges due to limited training data and the diverse scales of the targets. In this paper, we reannotate a training dataset of CSGI and propose a three-stage transfer learning method employing a novel hybrid architecture, MPViT-DeepLab, to help us focus on CSGI extraction and improve its accuracy. In MPViT-DeepLab, a Multi-path Vision Transformer (MPViT) serves as the feature extractor, feeding both coarse and fine features into the decoder and encoder of DeepLabv3+, respectively, which enables pixel-level segmentation of CSGI in remote sensing images. Our method achieves state-of-the-art results on the reannotated dataset.
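
The three training stages are not specified in this listing; the sketch below shows a generic staged fine-tuning loop of the kind often used with encoder-decoder segmentation models, purely as an illustration (the stage definitions, learning rates, freezing policy, and the placeholder model standing in for MPViT-DeepLab are all assumptions).

    import torch
    import torch.nn as nn

    def set_requires_grad(module, flag):
        for p in module.parameters():
            p.requires_grad = flag

    def staged_finetune(model, stages, train_one_epoch):
        """Run a multi-stage transfer-learning schedule: each stage freezes or
        unfreezes the encoder and trains with its own learning rate."""
        for stage in stages:
            set_requires_grad(model.encoder, stage["train_encoder"])
            set_requires_grad(model.decoder, True)  # decoder is trained in every stage
            optimiser = torch.optim.AdamW(
                (p for p in model.parameters() if p.requires_grad), lr=stage["lr"])
            for _ in range(stage["epochs"]):
                train_one_epoch(model, optimiser)

    # Tiny stand-in segmentation model (the real encoder/decoder would be
    # MPViT and the DeepLabv3+ head, which are not reproduced here).
    model = nn.Module()
    model.encoder = nn.Conv2d(3, 16, kernel_size=3, padding=1)
    model.decoder = nn.Conv2d(16, 2, kernel_size=1)

    def train_one_epoch(model, optimiser):
        x = torch.randn(2, 3, 64, 64)              # dummy image batch
        target = torch.randint(0, 2, (2, 64, 64))  # dummy pixel labels
        loss = nn.functional.cross_entropy(model.decoder(model.encoder(x)), target)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    stages = [
        {"train_encoder": False, "lr": 1e-3, "epochs": 5},   # stage 1: decoder only
        {"train_encoder": True,  "lr": 1e-4, "epochs": 10},  # stage 2: whole model
        {"train_encoder": True,  "lr": 1e-5, "epochs": 5},   # stage 3: low-LR refinement
    ]
    staged_finetune(model, stages, train_one_epoch)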
