
Multi-Dimensional Radar Sensing: Systems, Algorithms, and Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (20 December 2023) | Viewed by 16982

Special Issue Editors


Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: radar system and sensing; SAR imaging and application; object identification

Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: radar signal processing; multi-dimensional radar sensing; radar imaging and applications; inverse problems; compressed sensing

Guest Editor
State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Interests: SAR/ISAR imaging; InSAR signal processing; millimeter-wave radar

Guest Editor
Department of Electronic and Electrical Engineering, College of Engineering, Design and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
Interests: millimeter-wave/terahertz (THz) security detection; sparse imaging; antennas; arrays; engineering optimization; wireless communication

Special Issue Information

Dear Colleagues,

Radar is an irreplaceable sensor in many civil and military applications, as it can reveal target information without being affected by challenging weather conditions. In recent decades, multi-dimensional radar sensing techniques have shown promising results by exploiting multiple frequencies, polarizations, and channels. Nevertheless, original and innovative contributions that explore the system potential, improve algorithm performance, and address the key challenges are still in high demand. Recent developments in computationally efficient algorithms for high-dimensional data, robust low-complexity sensing systems, and AI-aided radar sensing applications point out the main directions in multi-dimensional radar sensing.

For instance, machine learning techniques have dramatically advanced the state of the art in many applications by leveraging adaptive feature representations. Motivated by this, intelligent radar perception/sensing has attracted wide attention from academia, research institutes, and space agencies, but these powerful learning frameworks remain to be further exploited in the area of multi-dimensional radar sensing. Meanwhile, compressed sensing has brought revolutionary breakthroughs in the accurate reconstruction of sparse signals: a signal can be recovered from sub-Nyquist sampling rates, provided it is sparse or compressible. This offers a promising way to simplify sensing systems. In addition, many of the latest technologies and new ideas are expected to find use in this field.

With this Special Issue, we aim to compile advanced research outcomes that address multi-dimensional radar sensing problems in terms of systems, algorithms, and applications. Papers discussing the major challenges, latest developments, and recent advances in this area are highly welcome.
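To make the compressed sensing remark above concrete, the sketch below recovers a sparse vector from far fewer measurements than unknowns using orthogonal matching pursuit, a standard greedy reconstruction algorithm. The Gaussian measurement matrix, the dimensions, and the sparsity level are all illustrative assumptions, not tied to any particular radar system.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit of y on the columns selected so far.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 60, 100, 3                      # 60 measurements of a length-100 signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_rec = omp(A, y, k)
err = np.linalg.norm(x_rec - x_true)
```

With a random Gaussian matrix and such a low sparsity level, the greedy selection identifies the true support with overwhelming probability, so the least-squares step reproduces the signal essentially exactly.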

Potential topics include, but are not limited to, the following points:

  • Multi-dimensional radar sensing: advances in systems and algorithms;
  • Novel applications of multi-dimensional radar sensing;
  • MIMO and multistatic/distributed radar systems, schemes, and data processing techniques;
  • Three-dimensional SAR imaging with artificial intelligence and machine learning-based approaches;
  • Three-dimensional object detection with advanced techniques;
  • Deformation monitoring and polarimetric SAR image classification;
  • Object reconstruction from multi-dimensional radar point clouds;
  • Advanced data visualization techniques for multi-dimensional radars;
  • Image processing and image fusion for multi-sensor data;
  • Simultaneous localization and mapping (SLAM) with multi-dimensional radars;
  • Millimeter-wave radar, terahertz radar, and LIDAR techniques;
  • Advances in radar system implementation, including waveform design and hardware design;
  • Reviews, techniques, designs, or demos addressing future radar developments;
  • Novel sensing or imaging techniques with potential for multi-dimensional radar applications.

Dr. Shunjun Wei
Dr. Mou Wang
Dr. Gang Xu
Dr. Shaoqing Hu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-dimensional radar sensing
  • imaging and sensing algorithms
  • object detection and recognition
  • 3-D image and point cloud processing
  • simultaneous localization and mapping
  • high-dimensional data visualization
  • multi-sensors image fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

23 pages, 7881 KiB  
Article
Improving Out-of-Distribution Generalization in SAR Image Scene Classification with Limited Training Samples
by Zhe Chen, Zhiquan Ding, Xiaoling Zhang, Xin Zhang and Tianqi Qin
Remote Sens. 2023, 15(24), 5761; https://doi.org/10.3390/rs15245761 - 17 Dec 2023
Viewed by 988
Abstract
For practical maritime SAR image classification tasks with special imaging platforms, the scenes to be classified often differ from those in the training sets, and the quantity and diversity of the available training data can be extremely limited. This problem of out-of-distribution (OOD) generalization with limited training samples leads to a sharp drop in the performance of conventional deep learning algorithms. In this paper, a knowledge-guided neural network (KGNN) model is proposed to overcome these challenges. By analyzing the saliency features of various maritime SAR scenes, universal knowledge is summarized in descriptive sentences. A feature integration strategy is designed to assign the descriptive knowledge to the ResNet-18 backbone. Both the individual semantic information and the inherent relations of the entities in SAR images are addressed. The experimental results show that our KGNN method outperforms conventional deep learning models in OOD scenarios with varying training sample sizes and achieves higher robustness in handling distributional shifts caused by weather conditions, terrain type, and sensor characteristics. In addition, the KGNN model converges in far fewer training epochs. The performance improvement indicates that the KGNN model learns representations guided by properties beneficial for OOD generalization with limited training samples.

25 pages, 9041 KiB  
Article
MuA-SAR Fast Imaging Based on UCFFBP Algorithm with Multi-Level Regional Attention Strategy
by Fanyun Xu, Rufei Wang, Yulin Huang, Deqing Mao, Jianyu Yang, Yongchao Zhang and Yin Zhang
Remote Sens. 2023, 15(21), 5183; https://doi.org/10.3390/rs15215183 - 30 Oct 2023
Cited by 1 | Viewed by 955
Abstract
Multistatic airborne SAR (MuA-SAR) benefits from the ability to flexibly adjust the positions of multiple transmitters and receivers in space, which can shorten the synthetic aperture time needed to achieve the required resolution. To ensure both imaging efficiency and quality across different system spatial configurations and trajectories, the fast factorized back projection (FFBP) algorithm has been proposed. However, if the FFBP algorithm based on polar coordinates is directly applied to the MuA-SAR system, the interpolation in the recursive fusion process introduces redundant calculations and error accumulation, leading to a sharp decrease in imaging efficiency and quality. In this paper, a unified Cartesian fast factorized back projection (UCFFBP) algorithm with a multi-level regional attention strategy is proposed for MuA-SAR fast imaging. First, a global Cartesian coordinate system (GCCS) is established. By designing a rotation mapping matrix and a phase compensation factor, data from different bistatic radar pairs can be processed coherently and efficiently. In addition, a multi-level regional attention strategy based on maximally stable extremal regions (MSER) is proposed: in the recursive fusion process, only the suspected target regions receive attention and are segmented for coherent fusion at each fusion level, which further improves efficiency. The proposed UCFFBP algorithm ensures both the quality and efficiency of MuA-SAR imaging. Simulation experiments verify the effectiveness of the proposed algorithm.
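For readers unfamiliar with the back projection family that FFBP and UCFFBP accelerate, the following minimal sketch implements plain time-domain back projection for a single simulated point target in a monostatic geometry. The carrier frequency, platform track, and imaging grid are illustrative assumptions and do not reflect the MuA-SAR configuration studied in the paper.

```python
import numpy as np

# Toy time-domain back projection: one point target, monostatic geometry.
c = 3.0e8
fc = 1.0e10                                   # assumed 10 GHz carrier
wavelength = c / fc
positions = np.linspace(-50.0, 50.0, 101)     # platform along-track positions (m)
altitude = 1000.0                             # platform height above the scene (m)
target_x = 0.0                                # true along-track target position (m)

# Ideal range-compressed phase history: two-way propagation phase per pulse.
r_true = np.sqrt(altitude**2 + (positions - target_x) ** 2)
echo = np.exp(-1j * 4.0 * np.pi * r_true / wavelength)

# Back-project: for each candidate position, compensate the phase and sum coherently.
grid = np.linspace(-5.0, 5.0, 201)
image = np.zeros(grid.size, dtype=complex)
for gi, gx in enumerate(grid):
    r = np.sqrt(altitude**2 + (positions - gx) ** 2)
    image[gi] = np.sum(echo * np.exp(1j * 4.0 * np.pi * r / wavelength))

peak = grid[np.argmax(np.abs(image))]         # should coincide with target_x
```

At the true target position the compensated phases align and all 101 pulses add in phase; everywhere else the sum is partially destructive, which is exactly the redundancy the factorized variants exploit to cut the per-pixel cost.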

29 pages, 4361 KiB  
Article
Scatterer-Level Time-Frequency-Frequency Rate Representation for Micro-Motion Identification
by Honglei Zhang, Wenpeng Zhang, Yongxiang Liu, Wei Yang and Shaowei Yong
Remote Sens. 2023, 15(20), 4917; https://doi.org/10.3390/rs15204917 - 11 Oct 2023
Cited by 2 | Viewed by 912
Abstract
Radar micro-motion signatures help to judge a target’s motion state and threat level, which plays a vital role in space situational awareness. Most existing micro-motion feature extraction methods derived from time-frequency (TF) representation cannot simultaneously satisfy the requirements of high resolution and multi-component representation, which limits their ability to process intersecting multi-component micro-motion signals. Meanwhile, because micro-motion features extracted from TF spectrograms focus only on the global characteristics of the targets and ignore the physical properties of micro-motion components, they perform poorly in micro-motion discrimination. To address these challenges, we empirically observed a decrease in the probability of intersection between components within the time-frequency-frequency rate (TFFR) space, where components appear as separated, non-intersecting spatial trajectories. This observation facilitates the extraction and association of multiple components. Given the differences in modulation laws among various micro-motions in the TFFR space, we introduce a novel micro-motion identification method based on scatterer-level TFFR representation. Our experimental evaluations on different targets and micro-motion types demonstrate the efficacy and robustness of the proposed method. The method not only underscores the separability of signal components but also expands the scope of micro-motion discrimination within the TFFR domain.

23 pages, 6893 KiB  
Article
DASANet: A 3D Object Detector with Density-and-Sparsity Feature Aggregation
by Qiang Zhang and Dongdong Wei
Remote Sens. 2023, 15(18), 4587; https://doi.org/10.3390/rs15184587 - 18 Sep 2023
Viewed by 1085
Abstract
In the field of autonomous driving and robotics, 3D object detection is a difficult but important task. To improve detection accuracy, LiDAR, which collects the 3D point cloud of a scene, is updated constantly. However, the density of the collected 3D points is low and their distribution across the scene is unbalanced, which affects the accuracy of 3D object detectors in object localization and identification. Although corresponding high-resolution scene images from cameras can be used as supplemental information, poor fusion strategies can result in lower accuracy than that of LiDAR-point-only detectors. Thus, to improve detection performance for the classification, localization, and even boundary location of 3D objects, a two-stage detector with density-and-sparsity feature aggregation, called DASANet, is proposed in this paper. In the first stage, dense pseudo point clouds are generated from camera images and used to obtain the initial proposals. In the second stage, two novel feature aggregation modules are designed to fuse LiDAR point information and pseudo point information, refining the semantic and detailed representation of the feature maps. To supplement the semantic information of the highest-scale LiDAR features for object localization and classification, a triple differential information supplement (TDIS) module is presented to extract the LiDAR-pseudo differential features and enhance them in the spatial, channel, and global dimensions. To increase the detailed information of the LiDAR features for object boundary location, a Siamese three-dimension coordinate attention (STCA) module is presented to extract stable LiDAR and pseudo point cloud features with a Siamese encoder and fuse these features using three-dimension coordinate attention. Experiments on the KITTI Vision Benchmark Suite demonstrate the improved localization and boundary location performance of our DASANet, and ablation studies demonstrate the effectiveness of the TDIS and STCA modules.

21 pages, 4779 KiB  
Article
Optimizing an Algorithm Designed for Sparse-Frequency Waveforms for Use in Airborne Radars
by Ming Hou, Wenchong Xie, Yuanyi Xiong, Hu Li, Qizhe Qu and Zhenshuo Lei
Remote Sens. 2023, 15(17), 4322; https://doi.org/10.3390/rs15174322 - 1 Sep 2023
Viewed by 787
Abstract
Low-frequency bands are an important means of realizing stealth target detection for airborne radars. However, in a complex electromagnetic environment, low-frequency airborne radar operating over land will inevitably encounter considerable unintentional communication interference and intentional interference, and effective suppression of this interference cannot be achieved through adaptive processing at the receiver alone. To solve this problem, this paper proposes optimizing an algorithm designed for sparse-frequency waveforms for use in airborne radars. The algorithm establishes a joint objective function based on the criteria of minimizing waveform energy in the spectrum stopband and minimizing the integrated sidelobe level of specified range cells. The waveform is optimized by a cyclic iterative algorithm based on the Fast Fourier Transform (FFT). It enforces the frequency-domain stopband constraint to achieve effective suppression of main-lobe interference while forming lower range sidelobes at specified range cells, improving the ability to detect dim targets. Theoretical analysis and simulation results show that the algorithm has good anti-interference performance.

22 pages, 6843 KiB  
Article
Monopulse Parameter Estimation for FDA-MIMO Radar under Mainlobe Deception Jamming
by Hao Chen, Rongfeng Li, Hui Chen, Qizhe Qu, Bilei Zhou, Binbin Li and Yongliang Wang
Remote Sens. 2023, 15(16), 3947; https://doi.org/10.3390/rs15163947 - 9 Aug 2023
Viewed by 1047
Abstract
Multiple input multiple output with frequency diversity array (FDA-MIMO) radar has unique advantages in mainlobe deception jamming suppression and target location. However, if the training sample contains the target signal, jamming suppression performance degrades and target measurement error grows. To deal with this problem, a method of coarse target location in the time domain is proposed based on cumulative sampling analysis. Taking full advantage of the strong correlation between the expected steering vector and the true target, the eigenvector and eigenvalue corresponding to the true target are found after eigendecomposition. The time-domain location of the target is roughly estimated during the cumulative sampling analysis from near to far. A pure jamming training sample can then be obtained by avoiding that location. A noise subspace projection algorithm is used to measure the angle and range of the target while suppressing mainlobe jamming. The simulation results show that the proposed method can roughly estimate the target location in the time domain even when mainlobe deception jamming completely covers the target. Compared with conventional methods, its jamming suppression performance and target localization error are closer to those of ideal sampling.

21 pages, 5174 KiB  
Article
Sea Clutter Amplitude Prediction via an Attention-Enhanced Seq2Seq Network
by Qizhe Qu, Hao Chen, Zhenshuo Lei, Binbin Li, Qinglei Du and Yongliang Wang
Remote Sens. 2023, 15(13), 3234; https://doi.org/10.3390/rs15133234 - 22 Jun 2023
Cited by 3 | Viewed by 1411
Abstract
Sea clutter is a ubiquitous form of interference in sea-detecting radars that inevitably influences target detection. An accurate sea clutter prediction method would therefore be beneficial, yet existing prediction methods are limited to one-step-ahead prediction. In this paper, a sea clutter prediction network (SCPNet) is proposed to achieve k-step-ahead prediction based on the characteristics of sea clutter. The SCPNet takes a sequence-to-sequence (Seq2Seq) structure as its backbone, and a simple self-attention module is employed to enhance adaptive feature selection. The SCPNet takes the normalized amplitudes of sea clutter as inputs and predicts an output sequence of length k; phase space reconstruction theory is also used to find the optimal input length of the sea clutter sequence. Results on the sea-detecting radar data-sharing program (SDRDSP) database show that the mean square errors of the proposed method are 1.48 × 10−5 and 8.76 × 10−3 for one-step-ahead and eight-step-ahead prediction, respectively. Compared with four existing methods, the proposed method achieves the best prediction performance.
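The phase space reconstruction step mentioned in the abstract can be sketched as a Takens-style delay embedding that turns a scalar clutter sequence into input vectors, each paired with the value k steps ahead. The embedding dimension, delay, and the sinusoidal stand-in for normalized sea clutter amplitudes are illustrative assumptions, not the SCPNet authors' settings.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Stack delay vectors [s_t, s_{t+tau}, ..., s_{t+(dim-1)*tau}] row by row."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

def make_pairs(series, dim, tau, k):
    """Pair each delay vector with the sample k steps after its newest entry."""
    X = delay_embed(series, dim, tau)
    newest = (dim - 1) * tau            # offset of the newest sample in each vector
    y = series[newest + k :]
    X = X[: len(y)]
    return X, y[: len(X)]

# Sinusoid stands in for normalized sea clutter amplitudes (illustrative only).
s = np.sin(0.1 * np.arange(200))
X, y = make_pairs(s, dim=4, tau=2, k=8)  # predict 8 steps ahead from 4 lagged samples
```

In practice the embedding dimension and delay would be chosen from the data (e.g. via false nearest neighbors and mutual information), and the resulting (X, y) pairs would feed a Seq2Seq-style predictor.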

20 pages, 2625 KiB  
Article
MVFRnet: A Novel High-Accuracy Network for ISAR Air-Target Recognition via Multi-View Fusion
by Xiuhe Li, Jinhe Ran, Yanbo Wen, Shunjun Wei and Wei Yang
Remote Sens. 2023, 15(12), 3052; https://doi.org/10.3390/rs15123052 - 10 Jun 2023
Cited by 3 | Viewed by 1681
Abstract
Inverse Synthetic Aperture Radar (ISAR) is a promising technique for air target imaging and recognition. However, traditional monostatic ISAR can only provide partial features of the observed target, which is a challenge for high-accuracy recognition. In this paper, to improve the recognition accuracy of air targets, we propose a novel recognition network based on multi-view ISAR imaging and fusion, called the Multi-View Fusion Recognition network (MVFRnet). The main structure of MVFRnet consists of two components: an image fusion module and a target recognition module. The fusion module is used for multi-view ISAR data and image preprocessing and mainly performs imaging spatial matching, image registration, and weighted fusion. The recognition network consists of the Skip Connect Unit and the Gated Channel Transformation (GCT) attention module, where the Skip Connect Unit ensures the extraction of global depth features of the image and the attention module enhances the perception of shallow contour features. In addition, MVFRnet has a strong perception of image details and suppresses the effect of noise. Finally, simulated and real data are used to verify the effectiveness of the proposed scheme. Multi-view ISAR echoes of six types of aircraft are produced by electromagnetic simulation software, and we also built a millimeter-wave ground-based bistatic ISAR experimental system to collect multi-view data from an aircraft model. The simulation and experimental results demonstrate that the proposed scheme obtains higher recognition accuracy than other state-of-the-art methods, improving recognition accuracy by approximately 30% compared with traditional monostatic recognition.

23 pages, 10861 KiB  
Article
MT-FANet: A Morphology and Topology-Based Feature Alignment Network for SAR Ship Rotation Detection
by Qianqian Liu, Dong Li, Renjie Jiang, Shuang Liu, Hongqing Liu and Suqi Li
Remote Sens. 2023, 15(12), 3001; https://doi.org/10.3390/rs15123001 - 8 Jun 2023
Cited by 2 | Viewed by 1417
Abstract
In recent years, ship target detection in synthetic aperture radar (SAR) images has progressed significantly due to the rapid development of deep learning (DL). However, because only the spatial feature information of ship targets is utilized, current DL-based SAR ship detection approaches cannot achieve satisfactory performance, especially in cases of multiple scales, rotations, or complex backgrounds. To address these issues, this paper proposes a novel deep-learning network for SAR ship rotation detection, called a morphology and topology-based feature alignment network, which better exploits morphological features and inherent topological structure information. The network consists of three main steps. First, deformable convolution is introduced to improve the representational ability for irregularly shaped ship targets, and a morphology and topology feature pyramid network is developed to extract inherent topological structure information. Second, based on these features, a rotation alignment feature head is devised to perform fine-grained processing, align and distinguish the features, enable regression prediction of rotated bounding boxes, and adopt a parameter-sharing mechanism that improves detection efficiency. Utilizing morphological and inherent topological structure information in this way enables superior detection performance. Finally, we evaluate the effectiveness of the proposed method on the rotated ship detection dataset in SAR images (RSDD-SAR). Our method outperforms other DL-based algorithms with fewer parameters, achieving an overall average precision of 90.84% and a recall of 92.21%. In inshore and offshore scenarios, our method performs well for the detection of multi-scale and rotation-varying ship targets, with average precision reaching 66.87% and 95.72%, respectively.

26 pages, 7574 KiB  
Article
Generalized Persistent Polar Format Algorithm for Fast Imaging of Airborne Video SAR
by Jiawei Jiang, Yinwei Li, Yinghao Yuan and Yiming Zhu
Remote Sens. 2023, 15(11), 2807; https://doi.org/10.3390/rs15112807 - 28 May 2023
Cited by 6 | Viewed by 2084
Abstract
As a cutting-edge research direction in the field of radar imaging, video SAR offers high-resolution, persistent imaging at any time and in any weather. Video SAR places high demands on the computational efficiency of the imaging algorithm, and the polar format algorithm (PFA) has become the preferred choice because of its applicability to spotlight mode and its relatively high computational efficiency. However, traditional PFA also has problems, such as low efficiency and limited scene size. To address these problems, a generalized persistent polar format algorithm, called GPPFA, is proposed for airborne video SAR imaging that meets the persistent imaging requirements of airborne video SAR under multitasking conditions. First, the wavenumber-domain resampling characteristics of video SAR PFA are analyzed, and a generalized resampling method is proposed to achieve higher efficiency. Second, to overcome the scene size limitation caused by wavefront curvature error, an efficient compensation method applicable to different scene sizes is proposed. GPPFA is capable of video SAR imaging at different wavebands, different slant ranges, and arbitrary scene sizes. Point target and extended target experiments verify the effectiveness and efficiency of the proposed method.

20 pages, 8306 KiB  
Article
Geolocation Accuracy Validation of High-Resolution SAR Satellite Images Based on the Xianning Validation Field
by Boyang Jiang, Xiaohuan Dong, Mingjun Deng, Fangqi Wan, Taoyang Wang, Xin Li, Guo Zhang, Qian Cheng and Shuying Lv
Remote Sens. 2023, 15(7), 1794; https://doi.org/10.3390/rs15071794 - 28 Mar 2023
Cited by 2 | Viewed by 3151
Abstract
The geolocation accuracy of Synthetic Aperture Radar (SAR) images is crucial for their application in various industries. Five high-resolution SAR satellites, namely ALOS, TerraSAR-X, Cosmo-SkyMed, RadarSat-2, and the Chinese YG-3, provide a vast amount of image data for research purposes, although their geometric accuracies differ despite similar resolutions. To evaluate and compare the geometric accuracy of these satellites under the same ground control reference, a validation field was established in Xianning, China. The rational function model (RFM) was used to analyze the geometric performance of the five satellites based on the Xianning validation field. The study showed that each image could achieve sub-pixel positioning accuracy in the range and azimuth directions when four ground control points (GCPs) were placed at the corners, resulting in a root mean square error (RMSE) of 1.5 pixels. The study also demonstrated the effectiveness of an automated GCP-matching approach that avoids the manual identification of points in SAR images. Overall, the verification results provide a reference for SAR satellite system design, calibration, and various remote sensing activities.
