

Artificial Intelligence-Based Sensor Data Processing for Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 15 August 2025 | Viewed by 8596

Special Issue Editors


Prof. Dr. Seongwook Lee
Guest Editor
School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
Interests: artificial intelligence; radar signal processing

Dr. Byung-Kwan Kim
Guest Editor
Department of Radio Science and Information Communication Engineering, Chungnam National University, Daejeon 34134, Republic of Korea
Interests: machine learning using radar signals; distributed radar system

Special Issue Information

Dear Colleagues,

This Special Issue addresses the various artificial intelligence algorithms that can be used in remote sensing. In particular, it covers signal and image processing techniques and sensor fusion systems for sensors widely used in remote sensing, such as cameras, lidar, and radar, and introduces artificial intelligence and deep learning-based methods for these tasks.

Covering sensing in both indoor and outdoor environments, this Special Issue introduces research on remote sensing from platforms on the ground and in space. It also aims to cover artificial intelligence-based algorithms for target detection, tracking, recognition, and identification. Because artificial intelligence algorithms can be applied across many areas of remote sensing, studies on diverse datasets and their experimental results will also be comprehensively covered.

Suggested themes and article types for submissions include, but are not limited to, the following:

  • Artificial intelligence/deep learning for remote sensing;
  • Sensors (e.g., camera, lidar, and radar) for remote sensing;
  • Fusion of heterogeneous sensor data;
  • Datasets for AI and deep learning;
  • AI-based signal/image processing for remote sensing.

You may choose our Joint Special Issue in Sensors.

Prof. Dr. Seongwook Lee
Dr. Byung-Kwan Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • artificial intelligence/deep learning
  • sensors (e.g., camera, lidar, radar)
  • sensor fusion
  • signal/image processing
  • target detection and tracking
  • target recognition and classification
  • image segmentation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research


18 pages, 4206 KiB  
Article
EOST-LSTM: Long Short-Term Memory Model Combined with Attention Module and Full-Dimensional Dynamic Convolution Module
by Guangxin He, Wei Wu, Jing Han, Jingjia Luo and Lei Lei
Remote Sens. 2025, 17(6), 1103; https://doi.org/10.3390/rs17061103 - 20 Mar 2025
Viewed by 285
Abstract
In the field of weather forecasting, improving the accuracy of nowcasting is a highly researched topic, and radar echo extrapolation technology plays a crucial role in this process. To address the limitations of existing deep learning methods in radar echo extrapolation, this paper proposes a spatio-temporal long short-term memory (LSTM) network model that integrates an attention mechanism with a full-dimensional dynamic convolution technique. An efficient multi-scale attention module fully extracts the multi-scale spatial and temporal features of radar images, enhancing the model's ability to perceive both global and local information. The full-dimensional dynamic convolution module introduces a dynamic attention mechanism over the spatial positions and the input and output channels of the convolutional kernel, adaptively adjusting the kernel weights and improving the flexibility and efficiency of feature extraction. With these modules combined in a single network, the model's accuracy and its modeling of temporal dependencies in predicting strong echo regions are significantly improved. Experiments based on Jiangsu meteorological radar data show that the model achieves excellent results in terms of the Critical Success Index (CSI) and Heidke Skill Score (HSS), demonstrating its efficiency and stability in radar echo prediction; the gains are especially pronounced at the high 35 dBZ threshold. The model provides an effective solution for fine-grained short-term precipitation nowcasting. Full article
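By way of illustration only, the sketch below shows one way a full-dimensional (omni-dimensional) dynamic convolution of the kind the abstract describes can be implemented: per-sample attention weights are predicted for the kernel's spatial positions, input channels, output channels, and candidate kernels, and are used to reweight the convolution kernel. All module and parameter names are assumptions for illustration; the actual EOST-LSTM architecture follows the paper.

```python
# Minimal sketch of a full-dimensional dynamic convolution layer (illustrative,
# not the authors' implementation). Attention over four kernel dimensions is
# predicted from a global context vector and applied to candidate kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullDimDynamicConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, num_kernels=4, reduction=4):
        super().__init__()
        self.k, self.num_kernels = k, num_kernels
        self.in_ch, self.out_ch = in_ch, out_ch
        # Candidate kernels: (num_kernels, out_ch, in_ch, k, k)
        self.weight = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        hidden = max(in_ch // reduction, 4)
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, hidden)
        # Four attention heads: spatial positions, input channels, output channels, kernels
        self.attn_spatial = nn.Linear(hidden, k * k)
        self.attn_in = nn.Linear(hidden, in_ch)
        self.attn_out = nn.Linear(hidden, out_ch)
        self.attn_kernel = nn.Linear(hidden, num_kernels)

    def forward(self, x):
        b, c, h, w = x.shape
        ctx = F.relu(self.fc(self.gap(x).flatten(1)))  # (b, hidden) global context
        a_sp = torch.sigmoid(self.attn_spatial(ctx)).view(b, 1, 1, 1, self.k, self.k)
        a_in = torch.sigmoid(self.attn_in(ctx)).view(b, 1, 1, self.in_ch, 1, 1)
        a_out = torch.sigmoid(self.attn_out(ctx)).view(b, 1, self.out_ch, 1, 1, 1)
        a_k = torch.softmax(self.attn_kernel(ctx), dim=1).view(b, self.num_kernels, 1, 1, 1, 1)
        # Aggregate the candidate kernels with all four attentions, per sample.
        w_dyn = (a_k * a_out * a_in * a_sp * self.weight.unsqueeze(0)).sum(dim=1)
        # Grouped-convolution trick: fold the batch into the channel dimension.
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       w_dyn.reshape(b * self.out_ch, self.in_ch, self.k, self.k),
                       padding=self.k // 2, groups=b)
        return out.view(b, self.out_ch, h, w)

# Example: one time step of a radar echo feature sequence with 64 channels.
frames = torch.randn(2, 64, 32, 32)
layer = FullDimDynamicConv(64, 64)
print(layer(frames).shape)  # torch.Size([2, 64, 32, 32])
```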

20 pages, 4367 KiB  
Article
A Self-Supervised Method of Suppressing Interference Affected by the Varied Ambient Magnetic Field in Magnetic Anomaly Detection
by Yizhen Wang, Qi Han, Dechen Zhan and Qiong Li
Remote Sens. 2025, 17(3), 479; https://doi.org/10.3390/rs17030479 - 30 Jan 2025
Viewed by 523
Abstract
Airborne magnetic anomaly detection is an important passive remote sensing technique. However, because the magnetic field produced by the aircraft degrades detection accuracy, this interference must be eliminated by an aeromagnetic compensation method. Most existing compensation methods assume that the ambient magnetic field is uniform when calculating the compensation model parameters. However, since the ambient magnetic field is actually not uniform and varies with the aircraft's location, the solved parameters ignore the component of aircraft interference related to the varying ambient magnetic field. Although some recent deep learning-based aeromagnetic compensation methods avoid the uniform-ambient-field assumption, insufficient supervision leads to poor model generalization. To address these limitations, we propose a self-supervised compensation method. The proposed method uses a network to separate the total measured magnetic field into an ambient magnetic field part and an aircraft magnetic field part. By doing so, the method avoids the influence of the uniform ambient field assumption and enhances model generalization. In addition, we introduce an improvement ratio loss function to distinguish the aircraft magnetic field from the ambient magnetic field when updating the model parameters. The proposed method is verified using measurement data from real flights. The experimental results indicate that it significantly outperforms state-of-the-art methods in real-flight compensation. Full article
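As a rough, hedged sketch of the separation idea (not the authors' implementation), the snippet below splits the measured total field into ambient and aircraft parts with a small network and uses the classical aeromagnetic improvement ratio (standard deviation before compensation divided by standard deviation after) as part of a self-supervised objective. The FieldSeparator class, feature layout, and loss weighting are illustrative assumptions; the paper's improvement ratio loss and network may differ in detail.

```python
# Illustrative sketch only: split the measured total field into ambient and
# aircraft parts and encourage a high improvement ratio on the compensated signal.
import torch
import torch.nn as nn

class FieldSeparator(nn.Module):
    """Maps a window of measured field + attitude features to two components."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.ambient_head = nn.Linear(hidden, 1)    # ambient geomagnetic field
        self.aircraft_head = nn.Linear(hidden, 1)   # aircraft interference

    def forward(self, x):
        h = self.backbone(x)
        return self.ambient_head(h), self.aircraft_head(h)

def improvement_ratio(total, compensated, eps=1e-8):
    # Classic aeromagnetic compensation metric: larger is better.
    return total.std() / (compensated.std() + eps)

model = FieldSeparator(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(256, 16)                    # placeholder flight-line features
total_field = features[:, :1] + 0.1 * torch.randn(256, 1)

ambient, aircraft = model(features)
compensated = total_field - aircraft               # remove estimated aircraft interference
# Self-supervised objective: reconstruct the measurement from the two parts
# while encouraging a high improvement ratio (low residual variability).
loss = nn.functional.mse_loss(ambient + aircraft, total_field) \
       - 0.1 * torch.log(improvement_ratio(total_field, compensated))
opt.zero_grad(); loss.backward(); opt.step()
```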

22 pages, 18328 KiB  
Article
A Three-Branch Pansharpening Network Based on Spatial and Frequency Domain Interaction
by Xincan Wen, Hongbing Ma and Liangliang Li
Remote Sens. 2025, 17(1), 13; https://doi.org/10.3390/rs17010013 - 24 Dec 2024
Viewed by 633
Abstract
Pansharpening technology plays a crucial role in remote sensing image processing by integrating low-resolution multispectral (LRMS) images and high-resolution panchromatic (PAN) images to generate high-resolution multispectral (HRMS) images. This process addresses the limitations of satellite sensors, which cannot directly capture HRMS images. Despite significant developments achieved by deep learning-based pansharpening methods over traditional approaches, most existing techniques either fail to account for the modal differences between LRMS and PAN images, relying on direct concatenation, or use similar network structures to extract spectral and spatial information. Additionally, many methods neglect the extraction of common features between LRMS and PAN images and lack network architectures specifically designed to extract spectral features. To address these limitations, this study proposed a novel three-branch pansharpening network that leverages both spatial and frequency domain interactions, resulting in improved spectral and spatial fidelity in the fusion outputs. The proposed method was validated on three datasets, including IKONOS, WorldView-3 (WV3), and WorldView-4 (WV4). The results demonstrate that the proposed method surpasses several leading techniques, achieving superior performance in both visual quality and quantitative metrics. Full article
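For intuition about what a frequency-domain branch can look like, the following generic "Fourier unit" sketch (an assumption, not the authors' exact three-branch design) moves features into the Fourier domain with torch.fft, filters the stacked real and imaginary parts with a learned 1×1 convolution, and transforms them back for interaction with the spatial path.

```python
# Generic frequency-domain feature branch (illustrative sketch).
import torch
import torch.nn as nn

class FourierUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Operates on concatenated real and imaginary parts of the spectrum.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")                 # (b, c, h, w//2+1), complex
        z = torch.cat([spec.real, spec.imag], dim=1)            # (b, 2c, h, w//2+1)
        z = self.act(self.conv(z))
        real, imag = z.chunk(2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho")   # back to the spatial domain

pan_feat = torch.randn(1, 32, 64, 64)       # example panchromatic feature map
freq_branch = FourierUnit(32)
fused = pan_feat + freq_branch(pan_feat)    # simple spatial/frequency interaction
print(fused.shape)                          # torch.Size([1, 32, 64, 64])
```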

20 pages, 1619 KiB  
Article
Contextual Attribution Maps-Guided Transferable Adversarial Attack for 3D Object Detection
by Mumuxin Cai, Xupeng Wang, Ferdous Sohel and Hang Lei
Remote Sens. 2024, 16(23), 4409; https://doi.org/10.3390/rs16234409 - 25 Nov 2024
Cited by 1 | Viewed by 810
Abstract
The study of LiDAR-based 3D object detection and its robustness under adversarial attacks has achieved great progress. However, existing adversarial attack methods mainly focus on the targeted object itself, which destroys the integrity of the object and makes the attack easy to perceive. In this work, we propose a novel adversarial attack against deep 3D object detection models named the contextual attribution maps-guided attack (CAMGA). Contextual attribution maps are generated from combinations of subregions in the context area and their impact on the prediction results. An attribution map exposes the influence of individual subregions in the context area on the detection results and narrows the scope of the adversarial attack. Perturbations are then generated under the guidance of a dual loss, proposed to suppress the detection results while maintaining visual imperceptibility. The experimental results show that the CAMGA method achieved an attack success rate of over 68% on three large-scale datasets and 83% on the KITTI dataset. Meanwhile, CAMGA has a transfer attack success rate of at least 50% against all four victim detectors, as they all rely heavily on contextual information. Full article

19 pages, 5790 KiB  
Article
Self-Supervised Marine Noise Learning with Sparse Autoencoder Network for Generative Target Magnetic Anomaly Detection
by Shigang Wang, Xiangyuan Zhang, Yifan Zhao, Haozi Yu and Bin Li
Remote Sens. 2024, 16(17), 3263; https://doi.org/10.3390/rs16173263 - 3 Sep 2024
Cited by 2 | Viewed by 1277
Abstract
As an effective physical field feature for perceiving ferromagnetic targets, the magnetic anomaly is widely used in covert marine surveillance tasks. However, its practical usability is affected by complex marine magnetic noise interference, making robust magnetic anomaly detection (MAD) a challenging task. Recently, learning-based detectors have been widely studied for discriminating magnetic anomaly signals and achieve superior performance compared with traditional rule-based detectors. Nevertheless, learning-based detectors require abundant data for model parameter training, which are difficult to obtain in practical marine applications. In practice, target magnetic anomaly data are usually expensive to acquire, while rich marine magnetic noise data are readily available. Thus, there is an urgent need to develop effective models that learn discriminative features from abundant marine magnetic noise data to detect newly appearing target anomalies. Motivated by this, in this paper we formulate MAD as a single-edge detection problem and develop a self-supervised marine noise learning approach for target anomaly classification. Specifically, a sparse autoencoder network is designed to model the marine noise and restore the base geomagnetic field from the collected noisy magnetic data. The reconstruction error of the network is then used as a statistical decision criterion to discriminate target magnetic anomalies from cluttered noise. Finally, we verify the effectiveness of the proposed approach on real sea trial data and compare it with seven state-of-the-art MAD methods on four numerical indexes. Experimental results indicate that it achieves a detection accuracy of 93.61% and a running time of 21.06 s on the test dataset, showing superior MAD performance over its counterparts. Full article
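The decision principle described above can be illustrated with a minimal sketch (architecture, sparsity weight, and threshold are assumptions, not the authors' settings): a sparse autoencoder is fitted only to marine noise windows, so a window containing a target anomaly reconstructs poorly, and the reconstruction error is thresholded as the detection statistic.

```python
# Minimal illustrative sketch: reconstruction error of a sparse autoencoder
# trained on noise-only windows serves as the anomaly detection statistic.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, win=128, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(win, 64), nn.ReLU(), nn.Linear(64, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, win))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

win = 128
model = SparseAE(win)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

noise_windows = torch.randn(512, win)        # stand-in for noise-only training windows
for _ in range(200):
    recon, z = model(noise_windows)
    # L1 penalty on the code enforces sparsity of the learned representation.
    loss = nn.functional.mse_loss(recon, noise_windows) + 1e-3 * z.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

def detect(window, threshold):
    with torch.no_grad():
        recon, _ = model(window)
        err = ((recon - window) ** 2).mean(dim=-1)
    return err > threshold                   # True = possible target magnetic anomaly
```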

17 pages, 26825 KiB  
Article
Efficient Target Classification Based on Vehicle Volume Estimation in High-Resolution Radar Systems
by Sanghyeok Hwangbo, Seonmin Cho, Junho Kim and Seongwook Lee
Remote Sens. 2024, 16(9), 1522; https://doi.org/10.3390/rs16091522 - 25 Apr 2024
Viewed by 1364
Abstract
In this paper, we propose a method for efficient target classification based on the spatial features of the point cloud generated by a high-resolution radar sensor. The frequency-modulated continuous wave radar sensor can estimate the distance and velocity of a target. In addition, the azimuth and elevation angles of the target can be estimated using a multiple-input and multiple-output antenna system. Using the estimated distance, velocity, and angles, the 3D point cloud of the target can be generated. From the generated point cloud, we extract the points belonging to each individual target using the density-based spatial clustering of applications with noise (DBSCAN) method and a camera mounted on the radar sensor. Then, we define the convex hull boundaries that enclose these point clouds in 3D space and in the 2D spaces obtained by orthogonally projecting onto the xy, yz, and zx planes. Using the vertices of the convex hull, we calculate the volume of each target and the areas in the 2D spaces. Several features, including the calculated spatial information, are numericized and assembled into feature vectors. We design an uncomplicated deep neural network classifier based on minimal input information to achieve fast and efficient classification. As a result, the proposed method achieved an average accuracy of 97.1%, and the time required for training was reduced compared with the method using only point cloud data and the convolutional neural network-based method. Full article
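A brief sketch of the geometric feature extraction described above, under assumed parameter values: DBSCAN clusters the point cloud, each cluster's 3D convex hull volume and the 2D hull areas of its xy, yz, and zx projections are computed with SciPy, and the resulting values form a feature vector for the classifier.

```python
# Illustrative sketch of convex-hull-based feature extraction from a clustered
# radar point cloud; clustering parameters and the synthetic cloud are assumptions.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

def hull_features(points_3d):
    """points_3d: (N, 3) array of one clustered target's radar detections."""
    volume = ConvexHull(points_3d).volume
    areas = []
    for dims in [(0, 1), (1, 2), (2, 0)]:                     # xy, yz, zx projections
        areas.append(ConvexHull(points_3d[:, dims]).volume)   # in 2D, .volume is the area
    return np.array([volume, *areas, len(points_3d)])

# Example: two synthetic targets, clustered and converted to feature vectors.
cloud = np.vstack([np.random.randn(150, 3) * 0.3 + [0.0, 0.0, 1.0],
                   np.random.randn(120, 3) * 0.3 + [8.0, 2.0, 1.0]])
labels = DBSCAN(eps=0.5, min_samples=8).fit_predict(cloud)
features = [hull_features(cloud[labels == k])
            for k in set(labels) if k != -1 and np.sum(labels == k) >= 4]
print(np.array(features).shape)   # (number of targets, 5) feature vectors for the classifier
```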

Other


2 pages, 293 KiB  
Correction
Correction: Wang et al. Deep Learning-Based Cloud Detection for Optical Remote Sensing Images: A Survey. Remote Sens. 2024, 16, 4583
by Zhengxin Wang, Longlong Zhao, Jintao Meng, Yu Han, Xiaoli Li, Ruixia Jiang, Jinsong Chen and Hongzhong Li
Remote Sens. 2025, 17(8), 1359; https://doi.org/10.3390/rs17081359 - 11 Apr 2025
Viewed by 150
Abstract
There was an error in the original publication [...] Full article

31 pages, 3303 KiB  
Systematic Review
Deep Learning-Based Cloud Detection for Optical Remote Sensing Images: A Survey
by Zhengxin Wang, Longlong Zhao, Jintao Meng, Yu Han, Xiaoli Li, Ruixia Jiang, Jinsong Chen and Hongzhong Li
Remote Sens. 2024, 16(23), 4583; https://doi.org/10.3390/rs16234583 - 6 Dec 2024
Cited by 2 | Viewed by 2517 | Correction
Abstract
In optical remote sensing images, the presence of clouds affects the completeness of the ground observation and further affects the accuracy and efficiency of remote sensing applications. Especially in quantitative analysis, the impact of cloud cover on the reliability of analysis results cannot be ignored. Therefore, high-precision cloud detection is an important step in the preprocessing of optical remote sensing images. In the past decade, with the continuous progress of artificial intelligence, algorithms based on deep learning have become one of the main methods for cloud detection. The rapid development of deep learning technology, especially the introduction of self-attention Transformer models, has greatly improved the accuracy of cloud detection tasks while achieving efficient processing of large-scale remote sensing images. This review provides a comprehensive overview of cloud detection algorithms based on deep learning from the perspective of semantic segmentation, and elaborates on the research progress, advantages, and limitations of different categories in this field. In addition, this paper introduces the publicly available datasets and accuracy evaluation indicators for cloud detection, compares the accuracy of mainstream deep learning models in cloud detection, and briefly summarizes the subsequent processing steps of cloud shadow detection and removal. Finally, this paper analyzes the current challenges faced by existing deep learning-based cloud detection algorithms and the future development direction of the field. Full article
