Pattern Recognition in Remote Sensing II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 January 2025 | Viewed by 6441

Special Issue Editors


Guest Editor
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Interests: pattern recognition; deep learning; remote sensing; image processing

Guest Editor
School of Automation, Beijing Institute of Technology, Beijing 100081, China
Interests: pattern recognition; image processing; multi-modal information fusion

Guest Editor
School of Space Information, Space Engineering University, Beijing 101416, China
Interests: remote sensing image processing

Guest Editor
Laboratoire d’Informatique, du Traitement de l’Information et des Systèmes (LITIS), Normandie Université, UNIROUEN, UNIHAVRE, INSA Rouen, 76000 Rouen, France
Interests: pattern recognition; autonomous navigation; information fusion; non-conventional imaging; polarimetric imaging; road scene analysis; obstacle detection; ADAS; intelligent vehicles

Special Issue Information

Dear Colleagues,

Pattern recognition is a powerful tool for remote sensing image analysis. With the development of deep learning, several remote sensing applications have achieved cutting-edge performance in the last decade. Nevertheless, remote sensing still lags behind other domains in adopting these advances. In this context, this Special Issue encourages the submission of papers that offer recent advances and innovative solutions across the broad topic of remote sensing image analysis. Submissions on topics including, but not limited to, the following are welcome:

  • New pattern recognition principles and their potential in remote sensing image analysis;
  • Low-level image processing techniques (e.g., denoising, enhancing, deblurring, and rectification);
  • Mid-level image processing techniques (e.g., feature extraction, feature matching, image mosaic, image fusion, super-resolution, salience detection, and change detection);
  • High-level image processing techniques (e.g., object recognition, semantic segmentation, image classification, image captioning, and image understanding);
  • Parallel computing and cloud computing techniques;
  • Lightweight network and embedding design for remote sensing processing;
  • Applications in resource management, disaster monitoring, intelligent agriculture, and smart cities.

Prof. Dr. Chunlei Huo
Dr. Zhiqiang Zhou
Dr. Lurui Xia
Dr. Samia Ainouz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pattern recognition
  • deep learning
  • remote sensing
  • image processing
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (5 papers)


Research


20 pages, 9451 KiB  
Article
Siamese InternImage for Change Detection
by Jing Shen, Chunlei Huo and Shiming Xiang
Remote Sens. 2024, 16(19), 3642; https://doi.org/10.3390/rs16193642 - 29 Sep 2024
Abstract
For some time, CNNs were the de facto state-of-the-art method in remote sensing image change detection. Although transformer-based models have surpassed CNN-based models due to their larger receptive fields, CNNs retain their value for their efficiency and ability to extract precise local features. To overcome the restricted receptive fields of standard CNNs, deformable convolution allows dynamic adjustment of the sampling locations in convolutional kernels, improving the network's ability to model global context. InternImage is an architecture built upon deformable convolution as its foundational operation. Motivated by InternImage, this paper proposes a CNN-based change detection vision foundation model. By introducing deformable convolution into a Siamese InternImage architecture, the proposed model is capable of capturing long-range dependencies and global information. A refinement block incorporating channel attention is utilized to merge local detail. The proposed approach achieved excellent performance on the LEVIR-CD and WHU-CD datasets.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
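The core idea of the Siamese design above, encoding both image epochs with shared weights and comparing the resulting embeddings, can be sketched in a few lines. This is a hypothetical illustration in which a random linear layer stands in for the InternImage backbone; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared "backbone": one random linear layer applied to both
# epochs. Weight sharing is what makes the network Siamese; in the paper
# this role is played by InternImage's deformable-convolution stages.
W = rng.standard_normal((8, 16))

def backbone(x):
    # Project 16-d pixel features to an 8-d embedding with a ReLU.
    return np.maximum(W @ x, 0.0)

def change_map(img_t1, img_t2):
    # Encode both epochs with the SAME weights, then compare embeddings:
    # a large feature difference marks a changed pixel.
    return np.linalg.norm(backbone(img_t1) - backbone(img_t2), axis=0)

# Two "images" of 100 pixels with 16-d features; only the second half changes.
t1 = rng.standard_normal((16, 100))
t2 = t1.copy()
t2[:, 50:] += 3.0
scores = change_map(t1, t2)
# Unchanged pixels score exactly zero; changed pixels score higher.
```

Because the two branches share weights exactly, identical inputs map to identical embeddings, so unchanged pixels produce a zero difference by construction.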

23 pages, 20109 KiB  
Article
ASIPNet: Orientation-Aware Learning Object Detection for Remote Sensing Images
by Ruchan Dong, Shunyao Yin, Licheng Jiao, Jungang An and Wenjing Wu
Remote Sens. 2024, 16(16), 2992; https://doi.org/10.3390/rs16162992 - 15 Aug 2024
Viewed by 499
Abstract
Remote sensing imagery poses significant challenges for object detection due to the presence of objects at multiple scales, dense target overlap, and the difficulty of extracting features from small targets. This paper introduces an innovative Adaptive Spatial Information Perception Network (ASIPNet), designed to address object detection in complex remote sensing scenes and significantly enhance detection accuracy. We first designed the core component of ASIPNet, an Adaptable Spatial Information Perception Module (ASIPM), which strengthens the feature extraction of multi-scale objects in remote sensing images by dynamically perceiving contextual background information. Secondly, to further refine the model's accuracy in predicting oriented bounding boxes, we integrated the Skew Intersection over Union based on Kalman Filtering (KFIoU) as an advanced loss function, surpassing the baseline model's traditional loss function. Finally, we designed detailed experiments on the rotation-annotated DOTAv1 and DIOR-R datasets to comprehensively evaluate the performance of ASIPNet. The experimental results demonstrate that ASIPNet achieved mAP50 scores of 76.0% and 80.1%, respectively. These results not only validate the model's effectiveness but also indicate that the method is significantly ahead of current state-of-the-art approaches.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)

22 pages, 4754 KiB  
Article
A Multi-Modality Fusion and Gated Multi-Filter U-Net for Water Area Segmentation in Remote Sensing
by Rongfang Wang, Chenchen Zhang, Chao Chen, Hongxia Hao, Weibin Li and Licheng Jiao
Remote Sens. 2024, 16(2), 419; https://doi.org/10.3390/rs16020419 - 21 Jan 2024
Cited by 3 | Viewed by 1604
Abstract
Water area segmentation in remote sensing is of great importance for flood monitoring. To overcome several challenges in this task, we construct the Water Index and Polarization Information (WIPI) multi-modality dataset and propose a multi-Modality Fusion and Gated multi-Filter U-Net (MFGF-UNet) convolutional neural network. The WIPI dataset enhances water information while reducing data dimensionality; in particular, the Cloud-Free Label provided in the dataset effectively alleviates the scarcity of labeled samples. Since a single filter form or uniform kernel size cannot handle the variety of sizes and shapes of water bodies, we propose the Gated Multi-Filter Inception (GMF-Inception) module in our MFGF-UNet. Moreover, we utilize an attention mechanism by introducing a Gated Channel Transform (GCT) skip connection and integrating GCT into GMF-Inception to further improve model performance. Extensive experiments on three benchmarks, the WIPI, Chengdu, and GF2020 datasets, demonstrate that our method achieves favorable performance with lower complexity and better robustness than six competing approaches. For example, on the WIPI, Chengdu, and GF2020 datasets, the proposed MFGF-UNet achieves F1 scores of 0.9191, 0.7410, and 0.8421, respectively; its average F1 score on the three datasets is 0.0045 higher than that of U-Net, while its GFLOPs are 62% lower on average. The new WIPI dataset, the code, and the trained models have been released on GitHub.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
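The channel-gating idea behind GCT-style attention, scaling each channel by a gate derived from its own global statistics, can be sketched as follows. This is a simplified illustration of channel gating in general, not the paper's exact GCT formulation:

```python
import numpy as np

def gated_channel_scale(features):
    # features: (channels, height, width). Each channel is scaled by a
    # sigmoid gate computed from its global mean response, so channels with
    # strong responses are emphasized and weak ones suppressed. A simplified
    # stand-in for GCT, not the paper's exact formulation.
    desc = features.mean(axis=(1, 2))                    # per-channel descriptor
    gate = 1.0 / (1.0 + np.exp(-(desc - desc.mean())))   # sigmoid around the mean
    return features * gate[:, None, None]

x = np.zeros((3, 4, 4))
x[0] += 5.0   # strong channel
x[2] -= 5.0   # weak channel
y = gated_channel_scale(x)
# y[0] is nearly preserved, while y[2] is strongly suppressed.
```

The real GCT additionally learns per-channel embedding and gating parameters; the fixed sigmoid here only conveys the gating mechanism.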

23 pages, 18133 KiB  
Article
NMS-Free Oriented Object Detection Based on Channel Expansion and Dynamic Label Assignment in UAV Aerial Images
by Yunpeng Dong, Xiaozhu Xie, Zhe An, Zhiyu Qu, Lingjuan Miao and Zhiqiang Zhou
Remote Sens. 2023, 15(21), 5079; https://doi.org/10.3390/rs15215079 - 24 Oct 2023
Cited by 1 | Viewed by 1383
Abstract
Object detection in unmanned aerial vehicle (UAV) aerial images has received extensive attention in recent years. Current mainstream oriented object detection methods for aerial images often suffer from complex network structures, slow inference speeds, and difficulties in deployment. In this paper, we propose a fast and easy-to-deploy oriented detector for UAV aerial images. First, we design a re-parameterization channel expansion network (RE-Net), which enhances the feature representation capabilities of the network based on a channel expansion structure and an efficient layer aggregation network structure. During inference, RE-Net can be equivalently converted to a more streamlined structure, reducing parameters and computational costs. Next, we propose DynamicOTA to dynamically adjust the sampling area and the number of positive samples, which solves the problem of insufficient positive samples in the early stages of training; DynamicOTA improves detector performance and facilitates training convergence. Finally, we introduce a sample selection module (SSM) to achieve NMS-free object detection, simplifying the deployment of our detector on embedded devices. Extensive experiments on the DOTA and HRSC2016 datasets demonstrate the superiority of the proposed approach.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
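The re-parameterization trick that lets RE-Net collapse into a streamlined inference structure rests on a simple fact: parallel linear branches sum to a single linear map. A minimal sketch, with linear layers standing in for the convolution branches (illustrative only, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Training-time structure: two parallel linear branches, stand-ins for the
# parallel convolution branches of a re-parameterizable block.
W1 = rng.standard_normal((4, 4))
W2 = rng.standard_normal((4, 4))

def train_time(x):
    return W1 @ x + W2 @ x          # multi-branch form used during training

# Inference-time structure: both branches are linear in x, so they collapse
# into one matrix: a single multiply and half the parameters to store.
W_merged = W1 + W2

def inference_time(x):
    return W_merged @ x

x = rng.standard_normal(4)
# train_time(x) and inference_time(x) agree up to floating-point rounding.
```

The same identity applies to convolutions (which are linear operators), which is why the merged network is mathematically equivalent yet cheaper to run.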

Other


15 pages, 6520 KiB  
Technical Note
Sensing and Navigation for Multiple Mobile Robots Based on Deep Q-Network
by Yanyan Dai, Seokho Yang and Kidong Lee
Remote Sens. 2023, 15(19), 4757; https://doi.org/10.3390/rs15194757 - 28 Sep 2023
Cited by 4 | Viewed by 1149
Abstract
In this paper, a novel deep reinforcement learning (DRL) algorithm based on a deep Q-network (DQN) is proposed for multiple mobile robots to find optimized paths. The robots' states are the inputs of the DQN, which estimates the Q-values of the agents' actions. After the action with the maximum Q-value is selected, the robots' actions are computed and sent to them. The robots then explore the area and detect obstacles: static obstacles are detected with a LiDAR sensor, while the other moving robots are treated as dynamic obstacles to be avoided. The robots give feedback on the reward and their new states. A positive reward is given when a robot successfully arrives at its goal point, zero reward is given in free space, and a negative reward is given if the robot collides with a static obstacle or another robot or returns to its start point. Multiple robots explore safe paths to their goals at the same time in order to improve learning efficiency. If a robot collides with an obstacle or another robot, it stops and waits for the other robots to complete their exploration tasks; an episode ends when all robots have found safe paths to their goals or all of them have collided. This collaborative behavior reduces the risk of collisions between robots, enhances overall efficiency, and prevents multiple robots from attempting to navigate the same unsafe path simultaneously. Moreover, storage space is used to store the optimal safe paths of all robots. Finally, the robots learn a policy for finding optimized paths to their goal points. The goal of the simulations and experiments is to make multiple robots move to their goal points efficiently and safely.
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing II)
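The reward scheme spelled out in the abstract (positive at the goal, zero in free space, negative on collision or on returning to the start), together with greedy action selection over Q-values, can be sketched directly. The numeric magnitudes below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def reward(at_goal, collided, at_start):
    # Reward scheme from the abstract: positive at the goal, negative on
    # collision with a static obstacle or another robot or on returning to
    # the start point, zero in free space. The magnitudes +1/-1 are
    # illustrative assumptions.
    if at_goal:
        return 1.0
    if collided or at_start:
        return -1.0
    return 0.0

def select_action(q_values):
    # Greedy policy: choose the action whose estimated Q-value is largest.
    # In the paper, the Q-values come from the trained DQN.
    return int(np.argmax(q_values))

a = select_action(np.array([0.1, 0.9, -0.3]))  # index of the largest Q-value
```

A full DQN would additionally maintain a replay buffer and a target network; this sketch covers only the reward signal and action-selection step that the abstract describes.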
