
Knowledge-Driven and/or Data-Driven Methods for Remote Sensing Image Processing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 13322

Special Issue Editors


Guest Editor
Department of Information Science, Xi’an Jiaotong University, Xi’an 710049, China
Interests: machine learning; hyperspectral unmixing of remote sensing images; remote sensing image fusion; data mining; intelligent computing

Guest Editor
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China
Interests: hyperspectral image processing; machine learning; scientific computing

Guest Editor
Department of Mathematics, The Chinese University of Hong Kong, Shatin, NT, Hong Kong
Interests: image processing; optimization; artificial intelligence; scientific computing; computer vision; machine learning; inverse problems

Guest Editor
Faculty of Electrical and Computer Engineering, University of Iceland, Reykjavík, Iceland
Interests: hyperspectral image processing; machine learning

Guest Editor
Remote Sensing Laboratory (RsLab), Department of Information Engineering and Computer Science, University of Trento, Via Sommarive, 5, Povo, I-38123 Trento, Italy
Interests: remote sensing; image processing; signal processing; pattern recognition; classification and fusion of multisource remote sensing data; multi-temporal image analysis; biophysical parameter estimation

Special Issue Information

Dear Colleagues,

Remote sensing image processing plays a critical role in diverse fields such as environmental monitoring, resource management, and disaster response. However, processing and analyzing remotely sensed data can be challenging due to complex environments, limited signal-to-noise ratios, and the presence of noise and artifacts. Two broad approaches to remote sensing image processing have emerged: knowledge-driven and data-driven methods. Knowledge-driven methods, built on expert experience or on mathematical models of the physical processes underlying remote sensing data, offer high interpretability. In contrast, data-driven methods, which have become prevalent in recent years, leverage machine learning algorithms to identify correlations and patterns in observed data. This Special Issue focuses on exploring the advantages and limitations of knowledge-driven and data-driven approaches and on ways to combine them to improve remote sensing image processing. We look forward to receiving a variety of works on this topic, whether theoretical or heuristic. This Special Issue is expected to leverage the complementary strengths of knowledge-driven and data-driven methods and to provide valuable insights for developing better remote sensing techniques across a broad range of applications.

Topics of interest include, but are not limited to, the following points:

  • General remote sensing image processing, including classification, object detection, segmentation, super-resolution, and denoising.
  • Real-world applications based on remote sensing images, such as land use mapping, vegetation analysis, and environmental monitoring.
  • Combining traditional methods and deep learning methods for remote sensing image processing and analysis.
  • Multi-modal remote sensing image processing, such as multi-modal image fusion and pan-sharpening.

Prof. Dr. Junmin Liu
Prof. Dr. Xile Zhao
Prof. Dr. Tieyong Zeng
Dr. Bin Zhao
Dr. Claudia Paris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • remote sensing
  • knowledge-driven methods
  • data-driven methods

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

19 pages, 2523 KiB  
Article
Hyperspectral Image Denoising by Pixel-Wise Noise Modeling and TV-Oriented Deep Image Prior
by Lixuan Yi, Qian Zhao and Zongben Xu
Remote Sens. 2024, 16(15), 2694; https://doi.org/10.3390/rs16152694 - 23 Jul 2024
Cited by 1 | Viewed by 1178
Abstract
Model-based hyperspectral image (HSI) denoising methods have attracted continuous attention over the past decades due to their effectiveness and interpretability. In this work, we aim to advance model-based HSI denoising through a careful investigation of both the fidelity and regularization terms, or correspondingly the noise and the prior, by virtue of several recently developed techniques. Specifically, we formulate a novel unified probabilistic model for the HSI denoising task, within which the noise is assumed to be pixel-wise non-independent and identically distributed (non-i.i.d.) Gaussian, with variances predicted by a pre-trained neural network, and the prior for the HSI is designed by incorporating the deep image prior (DIP) with total variation (TV) and spatio-spectral TV. To solve the resulting maximum a posteriori (MAP) estimation problem, we design a Monte Carlo Expectation–Maximization (MCEM) algorithm, in which the stochastic gradient Langevin dynamics (SGLD) method is used for computing the E-step, and the alternating direction method of multipliers (ADMM) is adopted for solving the optimization in the M-step. Experiments on both synthetic and real noisy HSI datasets verify the effectiveness of the proposed method.
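As a rough illustration of this formulation (the symbols and trade-off weights below are our own shorthand, not the authors' exact notation), the MAP problem couples a pixel-wise weighted fidelity term with TV-type regularizers on a DIP-parameterized image:

```latex
\min_{\theta}\;\sum_{i}\frac{\bigl(y_i-[f_\theta(z)]_i\bigr)^2}{2\sigma_i^2}
\;+\;\lambda_{\mathrm{TV}}\,\mathrm{TV}\bigl(f_\theta(z)\bigr)
\;+\;\lambda_{\mathrm{SSTV}}\,\mathrm{SSTV}\bigl(f_\theta(z)\bigr)
```

Here $f_\theta$ is the DIP network with fixed input $z$, $\sigma_i^2$ is the per-pixel noise variance predicted by the pre-trained network, and the MCEM algorithm alternates SGLD sampling for the E-step with ADMM updates for the M-step on this objective.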

18 pages, 10310 KiB  
Article
A New Sparse Collaborative Low-Rank Prior Knowledge Representation for Thick Cloud Removal in Remote Sensing Images
by Dong-Lin Sun, Teng-Yu Ji and Meng Ding
Remote Sens. 2024, 16(9), 1518; https://doi.org/10.3390/rs16091518 - 25 Apr 2024
Cited by 1 | Viewed by 856
Abstract
Efficiently removing clouds from remote sensing imagery presents a significant challenge, yet it is crucial for a variety of applications. This paper introduces a novel sparse function, named the tri-fiber-wise sparse function, meticulously engineered for the targeted tasks of cloud detection and removal. This function is adept at capturing cloud characteristics across three dimensions, leveraging the sparsity of mode-1, -2, and -3 fibers simultaneously to achieve precise cloud detection. By incorporating the concept of tensor multi-rank, which describes the global correlation, we have developed a tri-fiber-wise sparse-based model that excels in both detecting and eliminating clouds from images. Furthermore, to ensure that the cloud-free information accurately matches the corresponding areas in the observed data, we have enhanced our model with an extended box-constraint strategy. The experiments showcase the notable success of the proposed method in cloud removal. This highlights its potential and utility in enhancing the accuracy of remote sensing imagery.
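An illustrative transcription of such a model (our notation; the weights and constraint bounds are assumptions) decomposes the observed tensor into a low-rank cloud-free part and a fiber-wise sparse cloud part under a box constraint:

```latex
\min_{\mathcal{X},\,\mathcal{C}}\;
\operatorname{rank}_{t}(\mathcal{X})
\;+\;\sum_{k=1}^{3}\lambda_k\,\bigl\|\mathcal{C}\bigr\|_{2,1}^{(k)}
\quad\text{s.t.}\quad
\mathcal{O}=\mathcal{X}+\mathcal{C},\qquad
0\le\mathcal{X}\le 1
```

where $\operatorname{rank}_{t}(\cdot)$ denotes the tensor multi-rank capturing global correlation, $\|\cdot\|_{2,1}^{(k)}$ sums the $\ell_2$ norms of mode-$k$ fibers so that all three fiber directions are penalized simultaneously, and the box constraint reflects the extended box-constraint strategy keeping recovered values in a valid radiometric range.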

22 pages, 5150 KiB  
Article
Convolutional Neural Network-Based Method for Agriculture Plot Segmentation in Remote Sensing Images
by Liang Qi, Danfeng Zuo, Yirong Wang, Ye Tao, Runkang Tang, Jiayu Shi, Jiajun Gong and Bangyu Li
Remote Sens. 2024, 16(2), 346; https://doi.org/10.3390/rs16020346 - 15 Jan 2024
Cited by 3 | Viewed by 1958
Abstract
Accurate delineation of individual agricultural plots, the foundational units for agriculture-based activities, is crucial for effective government oversight of agricultural productivity and land utilization. To improve the accuracy of plot segmentation in high-resolution remote sensing images, this paper collects GF-2 satellite remote sensing images, uses ArcGIS 10.3.1 software to establish datasets, and builds UNet, SegNet, DeeplabV3+, and TransUNet neural network frameworks for experimental analysis. The TransUNet network, which yields the best segmentation results, is then optimized in both the residual module and the skip connection to further improve its performance for plot segmentation in high-resolution remote sensing images. This article introduces Deformable ConvNets in the residual module to improve the original ResNet50 feature extraction network and incorporates the convolutional block attention module (CBAM) at the skip connection. Experimental results indicate that the optimized remote sensing plot segmentation algorithm based on the TransUNet network achieves an accuracy of 86.02%, a recall of 83.32%, an F1-score of 84.67%, and an intersection over union (IoU) of 86.90%. Compared to the original TransUNet network, whose F1-score is 81.94% and whose IoU is 69.41%, the optimized TransUNet network significantly improves remote sensing land parcel segmentation, which verifies the effectiveness and reliability of the plot segmentation algorithm.
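For readers unfamiliar with these two components, the sketch below shows one plausible way to place a deformable convolution inside a residual block and to apply CBAM to features at a skip connection. Module names and hyperparameters are illustrative assumptions, not the authors' code:

```python
# Minimal sketch (PyTorch): deformable convolution in a residual block and a
# CBAM module for skip-connection features. Hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # A plain conv predicts the 2 * 3 * 3 = 18 sampling offsets per location.
        self.offset = nn.Conv2d(channels, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.deform(x, self.offset(x))  # sample at learned offsets
        return self.relu(x + self.bn(out))    # residual connection

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(             # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention over average- and max-pooled descriptors.
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

In a TransUNet-style decoder, an encoder feature map would typically pass through such a CBAM before being concatenated with the upsampled decoder feature.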

22 pages, 6234 KiB  
Article
Radiation-Variation Insensitive Coarse-to-Fine Image Registration for Infrared and Visible Remote Sensing Based on Zero-Shot Learning
by Jiaqi Li, Guoling Bi, Xiaozhen Wang, Ting Nie and Liang Huang
Remote Sens. 2024, 16(2), 214; https://doi.org/10.3390/rs16020214 - 5 Jan 2024
Cited by 3 | Viewed by 1563
Abstract
Infrared and visible remote sensing image registration is significant for utilizing remote sensing images to obtain scene information. However, it is difficult to establish a large number of correct matches because radiation variation between heterogeneous sensors, caused by their different imaging principles, makes similarity metrics hard to obtain. In addition, the sparse textures of infrared images and of some scenes, together with the small number of relevant trainable datasets, have hindered the development of this field. We therefore combined data-driven and knowledge-driven methods to propose a Radiation-variation Insensitive, Zero-shot learning-based Registration method (RIZER). First, RIZER adopts a detector-free coarse-to-fine registration framework as a whole, with the data-driven part being a Transformer based on zero-shot learning. Next, the knowledge-driven methods are embodied in the coarse-level matches, where we adopt a reliability-seeking strategy by introducing the HNSW algorithm and employing a priori knowledge in the form of local geometric soft constraints. Then, we simulate the matching strategy of the human eye to transform the matching problem into a model-fitting problem and employ a multi-constrained incremental matching approach. Finally, after fine-level coordinate refinement, we propose an outlier culling algorithm that requires only very few iterations. We also propose a multi-scene infrared and visible remote sensing image registration dataset. In testing, RIZER achieved a correct matching rate of 99.55% with an RMSE of 1.36, had an advantage in the number of correct matches, and showed good generalization to other multimodal images, achieving the best results when compared to several traditional and state-of-the-art multimodal registration algorithms.
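The HNSW step amounts to approximate nearest-neighbor search over coarse-level descriptors. Below is a minimal sketch using the hnswlib library, with a mutual nearest-neighbor check standing in for the paper's reliability criterion; the index parameters and the mutual check are our assumptions, not the paper's exact procedure:

```python
# Sketch: mutual nearest-neighbor matching of coarse-level descriptors with
# HNSW approximate search (hnswlib). Parameters are illustrative assumptions.
import numpy as np
import hnswlib

def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray):
    """Return pairs (i, j) where a_i and b_j are each other's nearest neighbor."""
    def build(data):
        index = hnswlib.Index(space="l2", dim=data.shape[1])
        index.init_index(max_elements=data.shape[0], ef_construction=200, M=16)
        index.add_items(data, np.arange(data.shape[0]))
        index.set_ef(64)
        return index

    a_to_b, _ = build(desc_b).knn_query(desc_a, k=1)  # NN of each a-descriptor in b
    b_to_a, _ = build(desc_a).knn_query(desc_b, k=1)  # NN of each b-descriptor in a
    return [(i, int(j)) for i, j in enumerate(a_to_b[:, 0])
            if int(b_to_a[int(j), 0]) == i]           # keep mutual pairs only
```

Local geometric soft constraints would then prune pairs whose displacement disagrees with that of neighboring matches before the model-fitting stage.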

18 pages, 4159 KiB  
Article
Fast Thick Cloud Removal for Multi-Temporal Remote Sensing Imagery via Representation Coefficient Total Variation
by Shuang Xu, Jilong Wang and Jialin Wang
Remote Sens. 2024, 16(1), 152; https://doi.org/10.3390/rs16010152 - 29 Dec 2023
Cited by 2 | Viewed by 1475
Abstract
Although thick cloud removal is a complex task, the past decades have witnessed the remarkable development of tensor-completion-based techniques. Nonetheless, they require substantial computational resources and may suffer from checkerboard artifacts. This study presents a novel technique to address this challenging task using representation coefficient total variation (RCTV), which imposes a total variation regularizer on decomposed data. The proposed approach enhances cloud removal performance while effectively preserving textures at high speed. The experimental results confirm the efficiency of our method in restoring image textures, demonstrating its superior performance compared to state-of-the-art techniques.
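One plausible reading of RCTV (our notation; the masking operator and weight are assumptions) is a masked low-rank factorization with total variation imposed on the thin coefficient factor rather than on the full image stack:

```latex
\min_{\mathbf{U},\,\mathbf{V}}\;
\bigl\|\mathcal{P}_{\Omega}\bigl(\mathbf{O}-\mathbf{U}\mathbf{V}^{\top}\bigr)\bigr\|_F^2
\;+\;\lambda\,\mathrm{TV}(\mathbf{U})
```

Here $\mathbf{O}$ is the unfolded multi-temporal observation, $\Omega$ indexes the cloud-free entries, $\mathbf{V}$ spans a low-dimensional temporal/spectral subspace, and $\mathbf{U}$ holds the representation coefficients; because $\mathbf{U}$ is far smaller than $\mathbf{O}$, each TV step is correspondingly cheap, which is consistent with the method's reported speed.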

31 pages, 19832 KiB  
Article
Classification of Urban Surface Elements by Combining Multisource Data and Ontology
by Ling Zhu, Yuzhen Lu and Yewen Fan
Remote Sens. 2024, 16(1), 4; https://doi.org/10.3390/rs16010004 - 19 Dec 2023
Viewed by 1214
Abstract
The rapid pace of urbanization and increasing demands for urban functionalities have led to diversification and complexity in the types of urban surface elements. The conventional approach of relying solely on remote sensing imagery for urban surface element extraction faces emerging challenges. Data-driven techniques, including deep learning and machine learning, necessitate a substantial number of annotated samples as prerequisites. In response, our study proposes a knowledge-driven approach that integrates multisource data with ontology to achieve precise urban surface element extraction. Within this framework, components from the EIONET Action Group on Land Monitoring in Europe matrix serve as ontology primitives, forming a shared vocabulary. The semantics of surface elements are deconstructed using these primitives, enabling the creation of specific descriptions for various types of urban surface elements by combining these primitives. Our approach integrates multitemporal high-resolution remote sensing data, network big data, and other heterogeneous data sources. It segments high-resolution images into individual patches, and for each unit, urban surface element classification is accomplished through semantic rule-based inference. We conducted experiments in two regions with varying levels of urban scene complexity, achieving overall accuracies of 93.03% and 97.35%, respectively. Through this knowledge-driven approach, our proposed method significantly enhances the classification performance of urban surface elements in complex scenes, even in the absence of sample data, thereby presenting a novel approach to urban surface element extraction.
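To make the rule-based inference step concrete, the toy sketch below classifies a segmented patch by matching primitive attributes against hand-written semantic rules. The primitives, thresholds, and classes are invented for illustration; the paper's shared vocabulary is built from the EIONET matrix components:

```python
# Toy sketch: semantic rule-based inference over ontology-style primitives.
# All primitives, thresholds, and class names here are illustrative only.
from dataclasses import dataclass

@dataclass
class Patch:
    ndvi: float          # vegetation primitive from multitemporal imagery
    height: float        # e.g., from a normalized DSM, in metres
    impervious: bool     # surface-cover primitive
    poi_density: float   # from network big data, points of interest per km^2

RULES = [
    ("commercial area", lambda p: p.impervious and p.poi_density > 50.0),
    ("building",        lambda p: p.impervious and p.height > 3.0),
    ("road",            lambda p: p.impervious and p.height <= 3.0),
    ("tree cover",      lambda p: p.ndvi > 0.5 and p.height > 2.0),
    ("grass",           lambda p: p.ndvi > 0.5 and p.height <= 2.0),
]

def classify(patch: Patch) -> str:
    # First matching rule wins; unmatched patches are left unclassified.
    for label, rule in RULES:
        if rule(patch):
            return label
    return "unclassified"

print(classify(Patch(ndvi=0.7, height=8.0, impervious=False, poi_density=5.0)))
```

A real system would express such rules in an ontology language and combine many more primitives, but the inference pattern is the same.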

25 pages, 5631 KiB  
Article
Learn by Yourself: A Feature-Augmented Self-Distillation Convolutional Neural Network for Remote Sensing Scene Image Classification
by Cuiping Shi, Mengxiang Ding, Liguo Wang and Haizhu Pan
Remote Sens. 2023, 15(23), 5620; https://doi.org/10.3390/rs15235620 - 4 Dec 2023
Cited by 3 | Viewed by 1553
Abstract
In recent years, with the rapid development of deep learning technology, great progress has been made in remote sensing scene image classification. Compared with natural images, remote sensing scene images are usually more complex, with high inter-class similarity and large intra-class differences, which makes it difficult for commonly used networks to effectively learn their features. In addition, most existing methods adopt hard labels to supervise the network model, which makes the model prone to losing fine-grained information about ground objects. To solve these problems, a feature-augmented self-distilled convolutional neural network (FASDNet) is proposed. First, ResNet34 is adopted as the backbone network to extract multi-level features of images. Next, a feature augmentation pyramid module (FAPM) is designed to extract and fuse multi-level feature information. Then, auxiliary branches are constructed to provide additional supervision. Self-distillation is applied between the feature augmentation pyramid module and the backbone network, as well as between the backbone network and the auxiliary branches. Finally, the proposed model is jointly supervised using a feature distillation loss, a logits distillation loss, and a cross-entropy loss. Extensive experiments are conducted on four widely used remote sensing scene image datasets, and the results show that the proposed method is superior to several state-of-the-art classification methods.
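The joint supervision can be summarized in a few lines. In the sketch below, the "teacher" signals come from the same network's feature augmentation pyramid and deeper stages, as in self-distillation; the weights and temperature are illustrative assumptions, not the paper's values:

```python
# Sketch: joint loss of cross-entropy, logits distillation (KL at temperature
# T), and feature distillation (MSE). Weights and temperature are assumptions.
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits,
                           student_feat, teacher_feat,
                           targets, T=4.0, alpha=0.5, beta=0.1):
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(                                   # soften both sides by T
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                      # rescale the KD gradient
    fd = F.mse_loss(student_feat, teacher_feat.detach())
    return ce + alpha * kd + beta * fd
```

Detaching the teacher tensors keeps gradients flowing only into the shallower (student) branches, which is the usual convention in self-distillation.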

17 pages, 65010 KiB  
Article
Remote Sensing Image Super-Resolution via Multi-Scale Texture Transfer Network
by Yu Wang, Zhenfeng Shao, Tao Lu, Xiao Huang, Jiaming Wang, Xitong Chen, Haiyan Huang and Xiaolong Zuo
Remote Sens. 2023, 15(23), 5503; https://doi.org/10.3390/rs15235503 - 25 Nov 2023
Cited by 3 | Viewed by 1968
Abstract
As the degradation factors of remote sensing images become increasingly complex, inferring their high-frequency details becomes more challenging than for ordinary digital photographs. For super-resolution (SR) tasks, existing deep learning-based single remote sensing image SR methods tend to rely on the limited texture information available within the input image, leading to various limitations. To address this, we propose a remote sensing image SR algorithm based on a multi-scale texture transfer network (MTTN). The proposed MTTN enhances the texture information of reconstructed images by adaptively transferring texture from a reference image according to texture similarity. The method adopts a multi-scale texture-matching strategy, which promotes the transfer of multi-scale texture information and obtains finer texture information from the most relevant semantic modules. Experimental results show that the proposed method outperforms state-of-the-art SR techniques on the Kaggle open-source remote sensing dataset from both quantitative and qualitative perspectives.
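The core matching idea can be sketched as correlating reference-feature patches against the input feature map and keeping the best-matching patch index per location. This mirrors only the matching step; MTTN's multi-scale wiring and transfer modules are not reproduced, and channel normalization is used as a cheap stand-in for full per-patch normalization:

```python
# Sketch: texture matching for reference-based SR, batch size 1 assumed.
# Reference patches act as convolution kernels; the argmax over patch
# indices gives, per location, the reference texture to transfer.
import torch.nn.functional as F

def match_texture(lr_feat, ref_feat, patch=3):
    c = lr_feat.shape[1]
    # Unfold the reference into (num_patches, C*k*k) and L2-normalize each patch.
    ref_patches = F.unfold(ref_feat, patch, padding=patch // 2)[0].t()
    kernels = F.normalize(ref_patches, dim=1).view(-1, c, patch, patch)
    # Correlate every reference patch with every input location.
    scores = F.conv2d(F.normalize(lr_feat, dim=1), kernels, padding=patch // 2)
    return scores.argmax(dim=1), scores.max(dim=1).values  # index + confidence
```

At each scale, the selected patch indices would drive the transfer of the corresponding reference textures into the reconstruction branch.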
