Image Matching and Target Recognition Technologies: Prospects and Challenges

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 1012

Special Issue Editors


Prof. Dr. Jianghua Cheng
Guest Editor
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: real-time image processing; target recognition

Prof. Dr. Gui Gao
Guest Editor
Faculty of Geosciences and Engineering, Southwest Jiaotong University, Chengdu 611756, China
Interests: radar signal processing; SAR target detection; marine environment; SAR GMTI

Dr. Xiongwu Xiao
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430072, China
Interests: real-time photogrammetry; 3D reconstruction; vector 3D mapping; remote sensing intelligent interpretation; deep learning

Special Issue Information

Dear Colleagues,

The rapid advancement of remote sensing technology has led to an unprecedented explosion in the availability of high-resolution Earth observation data, acquired from diverse platforms including satellites, aircraft, and unmanned aerial vehicles (UAVs). This deluge of data presents significant opportunities and formidable challenges in the domains of image matching and target recognition. Image matching serves as a foundational technique for tasks such as multi-temporal analysis, three-dimensional reconstruction, and image registration, while target recognition enables the automated identification and monitoring of terrestrial objects and dynamic phenomena. The academic community continues to advance state-of-the-art methodologies—from traditional feature-based approaches to deep learning architectures—to address domain-specific challenges inherent in remote sensing imagery, including variations in illumination, seasonal transitions, perspective disparities, and complex contextual backgrounds.

In response to these challenges, this Special Issue focuses on the field of image matching and target recognition for remote sensing, aiming to gather and highlight the latest research achievements and innovative practices in the field. This includes, but is not limited to, the following topics:

  • Advanced feature detection and matching algorithms for multi-sensor remote sensing data;
  • Deep learning approaches for target recognition and detection in complex environments;
  • Multi-temporal image analysis and change detection techniques;
  • Three-dimensional reconstruction and stereo matching from aerial and satellite imagery;
  • Multi-modal data fusion for improved matching and recognition performance;
  • Large-scale image processing and efficient computational methods;
  • Few-shot object detection and recognition in remote sensing data;
  • Robust algorithms for handling varying atmospheric conditions and seasonal changes;
  • Real-time processing techniques for emergency response applications.

Prof. Dr. Jianghua Cheng
Prof. Dr. Gui Gao
Dr. Xiongwu Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image matching
  • target recognition
  • real-time image processing
  • machine learning
  • image feature extraction and processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Order results
Result details
Select all
Export citation of selected articles as:

Research

26 pages, 9198 KB  
Article
Towards Pseudo-Labeling with Dynamic Thresholds for Cross-View Image Geolocalization
by Yuanyuan Yuan, Jianzhong Guo, Ruoxin Zhu, Ning Li, Ziwei Li and Weiran Luo
Remote Sens. 2026, 18(6), 944; https://doi.org/10.3390/rs18060944 - 20 Mar 2026
Viewed by 159
Abstract
Cross-view image geolocalization aims to accurately localize images that lack geo-tags by matching ground-view images with geo-tagged satellite images. However, the imaging differences between ground and satellite viewpoints are substantial, and existing methods usually rely on large numbers of accurately labeled cross-view image pairs. To address the significant perspective differences, high annotation costs, and low utilization of unpaired data, this paper proposes a cross-view generation model that integrates multi-scale contrastive learning and dynamic optimization: it designs a multi-scale contrastive loss function to strengthen the semantic consistency between the generated images and the target domain, adaptively balances the quality and quantity of pseudo-labels through a dynamic threshold screening mechanism, and introduces a hard-sample triplet loss to enhance the model's discriminative ability. Ablation experiments on the CVUSA and CVACT datasets show that the proposed BEV-CycleGAN+CL (Bird's-Eye View Cycle-Consistent Generative Adversarial Network with Contrastive Learning) model significantly outperforms the comparative models on the PSNR, SSIM, and RMSE metrics. Specifically, on the CVACT dataset, compared with the BEV-CycleGAN, BEV, and CycleGAN baselines, PSNR increased by 2.83%, 16.02%, and 42.30%; SSIM increased by 6.12%, 8.00%, and 18.48%; and RMSE decreased by 9.28%, 15.51%, and 25.35%, respectively. Similar advantages are observed on the CVUSA dataset. Compared with current state-of-the-art models, the proposed dynamic-threshold pseudo-label localization method is superior overall on recall metrics such as R@1, R@5, R@10, and R@1%, for example achieving an R@1 of 98.94% on CVUSA versus 98.68% for the best comparative model, Sample4G. This study provides methodological support for disaster emergency response, high-precision map construction for autonomous driving, military reconnaissance, and other applications.
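The two selection mechanisms named in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's actual formulation: the quantile-based threshold rule, the `floor` parameter, and the use of Euclidean distances between embedding vectors are all assumptions made here for clarity.

```python
import math

def dynamic_threshold(confidences, floor=0.5, quantile=0.8):
    # Adaptive threshold: the q-th quantile of the current confidence
    # distribution, clamped below by a fixed floor. As the model improves
    # and confidences rise, the threshold rises with them, trading
    # pseudo-label quantity for quality.
    s = sorted(confidences)
    idx = min(len(s) - 1, int(quantile * len(s)))
    return max(floor, s[idx])

def select_pseudo_labels(samples, confidences, **kw):
    # Keep only samples whose predicted confidence clears the
    # dynamically computed threshold.
    t = dynamic_threshold(confidences, **kw)
    return [x for x, c in zip(samples, confidences) if c >= t]

def hard_triplet_loss(anchor, positive, negatives, margin=0.3):
    # Hard-sample triplet loss: penalize the anchor-positive distance
    # against the *closest* (hardest) negative, plus a margin.
    d = lambda u, v: math.dist(u, v)
    d_ap = d(anchor, positive)
    d_an = min(d(anchor, n) for n in negatives)
    return max(0.0, d_ap - d_an + margin)
```

In practice both pieces would operate on learned embeddings inside a training loop; the sketch only shows the selection and loss logic.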

30 pages, 9900 KB  
Article
Multimodal Weak Texture Remote Sensing Image Matching Based on Normalized Structural Feature Transform
by Qiang Xiong, Xiaojuan Liu, Xuefeng Zhang and Tao Ke
Remote Sens. 2026, 18(5), 775; https://doi.org/10.3390/rs18050775 - 4 Mar 2026
Viewed by 316
Abstract
Significant nonlinear radiation differences and weak texture differences exist between multimodal weak texture remote sensing images (MWTRSIs). When traditional methods are used to match MWTRSIs, the low distinguishability of descriptors in weak texture regions results in poor matching performance. A robust matching method is therefore proposed based on the normalized structural feature transform (NSFT), which extracts the spatial structural features of images while mitigating nonlinear radiation differences between weak texture regions. First, a bilateral filter transforms the weak texture remote sensing image into a normalized image, which greatly weakens the nonlinear radiation differences while retaining most of the structural information. Then, the UC-KAZE detector is designed to extract a large number of evenly distributed feature points on the normalized image. Subsequently, a rotation-invariant multimodal weak texture feature descriptor is designed based on the self-similarity of the weak texture image. Finally, initial correspondences are constructed by bilateral matching, and mismatches are removed by the fast sample consensus (FSC) algorithm. Comparison experiments on eight types of MWTRSIs show that the proposed method has good scale and rotation invariance and good resistance to nonlinear radiation differences and weak texture differences.
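The "bilateral matching" step in the pipeline above (mutual nearest-neighbour filtering of descriptor correspondences) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the brute-force squared-Euclidean search and tuple descriptors are assumptions, and the subsequent FSC mismatch removal is not reproduced here.

```python
def mutual_nn_matches(desc_a, desc_b):
    # Bilateral (mutual nearest-neighbour) matching: keep a candidate
    # pair (i, j) only if descriptor j is i's nearest neighbour in B
    # AND descriptor i is j's nearest neighbour in A.
    def nn(query, pool):
        # Index of the pool descriptor closest to `query`
        # (brute-force squared Euclidean distance).
        return min(range(len(pool)),
                   key=lambda k: sum((x - y) ** 2
                                     for x, y in zip(query, pool[k])))

    a_to_b = [nn(d, desc_b) for d in desc_a]
    b_to_a = [nn(d, desc_a) for d in desc_b]
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

The mutual check discards one-sided matches, which is the standard cross-check heuristic; in the proposed method the surviving correspondences would then be passed to FSC to remove remaining mismatches.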
