Article

A Novel Automatic Registration Method for Array InSAR Point Clouds in Urban Scenes

1
National Key Laboratory of Microwave Imaging Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
2
School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(3), 601; https://doi.org/10.3390/rs16030601
Submission received: 10 January 2024 / Revised: 3 February 2024 / Accepted: 4 February 2024 / Published: 5 February 2024

Abstract

The array interferometric synthetic aperture radar (Array InSAR) system resolves shadow issues by employing two scans in opposite directions, facilitating the acquisition of a comprehensive three-dimensional representation of the observed scene. The point clouds obtained from the two scans need to be transformed into the same coordinate system using registration techniques to create a more comprehensive visual representation. However, the two point clouds lack corresponding points and exhibit distinct geometric distortions, which prevents direct registration. This paper analyzes the error characteristics of array InSAR point clouds and proposes a robust registration method for array InSAR point clouds in urban scenes. The method represents the 3D information of the point clouds as images, with pixel positions corresponding to the azimuth and ground range directions and pixel intensity denoting the average height of the points within each pixel. The KAZE algorithm and an enhanced matching approach are used to obtain the homonymous points of the two images and, subsequently, the transformation relationship between them. Experimental results with actual data demonstrate that, for architectural elements within urban scenes, the relative angular differences of registered facades are below 0.5°. As for ground elements, the Root Mean Square Error (RMSE) after registration is less than 1.5 m, thus validating the superiority of the proposed method.

1. Introduction

Recently, 3D imaging techniques have witnessed rapid development. Compared with 2D images, point clouds have the ability to capture the precise spatial structures and geometric features of objects. By analyzing and processing point clouds, valuable information such as distances, angles, and occlusion relationships between objects can be extracted, making it of great significance in applications such as robot navigation [1], map creation [2], environmental reconstruction [3], and virtual reality [4].
Laser scanning technology [5,6], photogrammetric stereo matching [7,8], and array InSAR [9] are the primary methods for acquiring point clouds. In contrast to optical sensors, SAR exhibits excellent imaging capabilities even under adverse weather conditions. By deploying multiple antennas in the across-track direction, array InSAR enables multi-angle observations of the target scene. It effectively addresses the problem of overlap between targets and terrain in 2D images, significantly enhancing the capabilities of target detection, identification, and detailed interpretation [10].
In urban scenes, microwaves emitted by radar are often obstructed by artificial facilities, leading to incomplete point clouds generated from a single scan. In practical applications, point cloud registration techniques are typically required to match and fuse point clouds acquired from different scans. Figure 1 presents a schematic diagram illustrating the acquisition of complete 3D information of urban scenes through two scans.
Two flight tests were conducted from opposing directions to image the scene and generate point clouds. Firstly, disparities in shadow positions and the anisotropy of scatterers result in a lack of corresponding points between the two scans. Additionally, the SAR point cloud contains a substantial number of outliers, attributed partly to multiple scattering effects and partly to the super-resolution imaging algorithm. Subsequently, the SAR point cloud requires a transformation from the azimuth-range-elevation coordinate system to the azimuth-ground range-height coordinate system. Discrepancies in the selection of reference heights lead to conspicuous vertical and ground range offsets in the point clouds, as well as a stretching effect along the ground range direction [11], as illustrated in Figure 2. Lastly, airborne array InSAR exhibits significant changes in local incidence angles within the spatial domain, introducing supplementary geometric approximation errors [12]. In short, the registration of array InSAR point clouds faces substantial challenges.
Conventional point cloud registration typically follows a strategy of coarse registration followed by fine registration [13]. The purpose of coarse registration is to find a suitable initial transformation that serves as a foundation for subsequent fine registration. Fine registration involves refining the initial transformation matrix through multiple iterative optimization steps to achieve a global optimum.
Coarse registration of point clouds typically involves extracting geometric features from the point cloud. These features can be categorized as point-based, line-based, and surface-based. Barnea applied the Scale-Invariant Feature Transform (SIFT) to laser point cloud registration [14]. Aiger introduced a method called Four-Point Congruent Sets (4PCS), which utilized the invariant property of the ratio of lines formed by four coplanar points, achieving global point cloud registration [15]. Compared to points, lines possess stronger geometric topological characteristics and are easier to extract. Jaw proposed a line-based registration method, where the matching of 3D line features is constrained by angle and distance [16]. Cheng Liang presented a hierarchical registration method based on 3D road networks and building outlines [17]. Lee extracted line features by utilizing the intersection points of adjacent planes and adjusted the differences between overlapping data using these line features [18]. Surface features contain more information compared to point or line features and are less affected by noise. Researchers generally use methods such as least squares, random sample consensus (RANSAC), and principal component analysis (PCA) for surface fitting. The minimum sum of squared Euclidean distances between surfaces is taken as the objective function [19].
One of the most classical methods for fine registration is the Iterative Closest Point (ICP) algorithm [20]. Through iterative optimization, the ICP algorithm aims to align the positions of two sets of point clouds as closely as possible. K. AL-Durgham combined the RANSAC method with the SIFT operator, effectively addressing the registration problem without local features [21]. Eijiro proposed the Normal Distribution Transform (NDT) method [22], which converts point clouds in a 3D grid into probability distribution functions. The probability distribution of each position measurement sample in the grid follows a normal distribution. By optimizing the normal distribution probabilities of two point clouds using the Hessian matrix method, fine registration is achieved. These methods assume that one point set is a subset of the other. When this assumption is invalid, it leads to false matches [23].
In recent years, the success of deep learning in advanced visual tasks has extended to the domain of point cloud processing. PointNet [24] and PointNet++ [25] represent two significant milestones. PointNet generates a descriptor for each point, while PointNet++ is a key technology for extracting local information from point clouds. The crucial stage involves the set abstraction module, composed of sampling, grouping, and PointNet components. Subsequently, numerous researchers have adopted learning-based techniques [26,27,28,29] for point cloud registration. The objective of these techniques is to extract features from 3D points and find accurate corresponding points, followed by the estimation of transformations using these corresponding points.
The aforementioned methods are widely applied in the registration of laser point clouds. However, for array InSAR point clouds, it is a challenge to extract matching features from the 3D information of the point clouds. Dr. Zhu proposed an approach that extracts the L-shaped structures of buildings in tomographic SAR point clouds and achieves automatic registration of point clouds from different scans [30]. Dr. Tong from Tongji University proposed a method that utilizes the constraint of parallel building facades to match specific pairs of building facades [31]. However, the areas at the bottoms of buildings contain holes due to occlusion, and there is a large amount of noise below the building facades due to third-order scattering [32]. The fitted building facades therefore exhibit large errors. Additionally, the stretching phenomenon within the ground range of the point clouds has not been taken into account. To address these challenges, this paper proposes a novel method for the registration of array InSAR point clouds.
In this study, we first correct the flattened phase error caused by the differences in local incidence angles. For point clouds of large urban scenes, the ground range can be several hundred meters or more, and the flattened phase error caused by the differences in local incidence angles cannot be ignored. The height variation of the ground points is relatively flat, which allows us to easily calculate the relationship between point cloud height and ground range and correct the flattened phase error. Next, we project the corrected point cloud onto the x–y plane and divide the plane into grids, which serve as pixels for generating grayscale images. The pixel intensity is represented by the average height of the points falling within each grid. The quality of the generated images is subpar, and utilizing traditional image-matching methods makes it challenging to attain the transformation relationship between the two images. We utilize the KAZE [33] algorithm to extract feature points from both the original and blurred images. The stable feature point refers to a feature point in the original image for which there exists a feature point in the blurred image that is sufficiently close to it. Next, we filter matching point pairs from the stable feature points in the images. The transformation relationship between point clouds in the azimuth and ground range directions is calculated based on the positional relationship of the matching points. The height offset between point clouds is represented by the average intensity difference of the matching points. In summary, this method makes two main contributions:
  • An analysis was conducted on the height errors in airborne array InSAR point clouds caused by local incidence angle variations, followed by their subsequent correction.
  • The KAZE algorithm was introduced into the point cloud registration problem, and a method for selecting robust feature points was proposed to address the registration of array InSAR point clouds.

2. Methods

The main challenge in effectively fusing array InSAR point clouds lies in the inability to extract stable feature points and determine true corresponding matching points in the 3D information. The proposed workflow for point cloud registration is shown in Figure 3. Firstly, the flattened phase error caused by local incidence angle differences is corrected. Then, the point cloud is projected onto the ground to generate a grayscale image, where the pixel intensity represents the average height of the points within the pixel. To obtain stable feature points, the KAZE algorithm is employed to extract feature points from both the grayscale image and the image with applied defocus blur, and a distance threshold is set to select stable feature points. Subsequently, the nearest neighbor distance ratio (NNDR) strategy and vector consistency are employed to determine the matching points between the two images. The position of the matching points is used to determine the transformation relationship in the azimuth and ground range directions of the two flight test point clouds. The pixel intensity of the matching points is utilized to determine the height offset between the two point clouds.

2.1. Flattened Phase Error Correction

The multi-channel images of the airborne array InSAR are acquired simultaneously, so there is no temporal decorrelation, and the interferometric phase is sensitive only to the target elevation. The phase of airborne array InSAR is composed of the flat earth effect, height, and system noise [12]. Figure 4a illustrates the geometric configuration of radar interferometry in relation to the flattened phase. In the process of interferometry, a reference object is essential to mitigate the impact of the flat earth effect. Due to the nature of radar imaging, it is difficult to distinguish scatterers that are equidistant from the radar. Thus, for a point p with a relative height of h, an equivalent point r is specified to calculate the flattened phase, local incidence angle, and perpendicular baseline. Then h can be written as follows:
$$h = R\cos\theta_r - R\cos\theta_p$$
where R is the slant range, θ_p is the local incidence angle of point p, and θ_r is the equivalent incidence angle used for calculating the flat earth effect.
In fact, the actual local incidence angle θ_p cannot be obtained. In conventional processing, the equivalent incidence angle θ_r on the reference body is used in place of the actual local incidence angle. In this case, the estimated height h can be expressed as
$$h = \frac{\lambda R \sin\theta_r}{4\pi B \cos(\theta_r - \alpha)}\,(\phi_p - \phi_r)$$
where φ_p and φ_r are the interferometric phases of point p and point r, respectively, and B denotes the length of the baseline. Combining Equations (1) and (2) yields the following:
$$\Delta h = R\cos\theta_r - R\cos\theta_p - \frac{\lambda R \sin\theta_r}{4\pi B \cos(\theta_r - \alpha)}\,(\phi_p - \phi_r)$$
According to the geometric relationship shown in Figure 4a, φ_p and φ_r can be expressed as
$$\phi_p = \frac{4\pi\left(R - \sqrt{B^2 + R^2 - 2BR\sin(\theta_p - \alpha)}\right)}{\lambda}$$
$$\phi_r = \frac{4\pi\left(R - \sqrt{B^2 + R^2 - 2BR\sin(\theta_r - \alpha)}\right)}{\lambda}$$
The ground range position of point p is y. According to the geometric relationships, y can be expressed in terms of R and θ_p:
$$y = R\sin\theta_p$$
By substituting Equations (4)–(6) into Equation (3) and setting the baseline inclination angle α to 0, the relationship between Δh and y can be obtained as follows:
$$\Delta h = h - \frac{\sqrt{h^2 - 2Hh + y^2}\,\sqrt{(H-h)^2 + y^2}}{BH}\left(\sqrt{B^2 - 2B\sqrt{h^2 - 2Hh + y^2} + (H-h)^2 + y^2} - \sqrt{B^2 - 2By + (H-h)^2 + y^2}\right)$$
Δh is related to the radar platform height H, the target height h, and the local incidence angle θ_p (corresponding to y). For a point cloud generated from a single flight, the height of the radar platform remains constant, thereby exerting an equal influence on the measurement errors. The error caused by the height of the same target is equivalent between the two flights. Hence, we solely consider the influence of the local incidence angle on the height errors and aim to establish the relationship between height errors and ground range positions. Based on Equation (3) and the simulation parameters in Table 1, the relationship between the height error and the ground range position is simulated, as shown in Figure 4b.
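As a concrete illustration, the short script below reproduces this simulation numerically from Equations (3)–(6) (with α = 0) rather than from the closed form of Equation (7). It is a minimal sketch: the parameter values follow Table 1, while the ground range span, chosen to roughly cover incidence angles of 20–45°, is an assumption.

```python
import numpy as np

# Sketch: height error of Eq. (3) versus ground range y, built from the geometry
# of Eqs. (4)-(6) with baseline inclination alpha = 0. Parameters follow Table 1.
H, lam, h, B = 4000.0, 0.02, 100.0, 2.0    # platform height (m), wavelength (m), target height (m), baseline (m)

y = np.linspace(1500.0, 4000.0, 500)       # ground range positions (m); assumed span (~20-45 deg incidence)
R = np.sqrt((H - h) ** 2 + y ** 2)         # slant range to target p at height h
theta_p = np.arcsin(y / R)                 # actual local incidence angle of p
theta_r = np.arccos(H / R)                 # equivalent incidence angle of reference point r (same R, height 0)

# Interferometric phases of p and r, Eqs. (4) and (5)
phi_p = 4.0 * np.pi / lam * (R - np.sqrt(B ** 2 + R ** 2 - 2.0 * B * R * np.sin(theta_p)))
phi_r = 4.0 * np.pi / lam * (R - np.sqrt(B ** 2 + R ** 2 - 2.0 * B * R * np.sin(theta_r)))

# Height error, Eq. (3): true height minus the height estimated with theta_r
dh = (R * np.cos(theta_r) - R * np.cos(theta_p)) \
     - lam * R * np.sin(theta_r) / (4.0 * np.pi * B * np.cos(theta_r)) * (phi_p - phi_r)

print(dh[0], dh[len(dh) // 2], dh[-1])     # the error varies systematically with ground range
```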
Expanding Equation (7) as a Taylor series yields a finite series whose highest-order term is of fifth order. The magnitudes of the third-, fourth-, and fifth-order terms are on the order of 10^−8, 10^−11, and 10^−14, respectively, so terms of third order and higher can be neglected in this paper. To correct Δh, we assume that the urban terrain is a flat plane, extract the ground portion, and fit a quadratic function to model the relationship between height and ground range. According to the analysis in [34], among the various filtering algorithms, morphology-based filters have demonstrated the best performance in extracting the ground in urban scenes. Morphology-based filters primarily rely on two fundamental operations, dilation and erosion, which combine to form the opening and closing operations employed for point cloud filtering. The method rasterizes the original point cloud based on the lowest points within a given window size and subsequently processes it using an opening operation. Points for which the height difference before and after the operation is less than a specified tolerance are labeled as ground points. It is evident that the performance of this filtering technique is greatly influenced by the choice of window size, making it challenging to strike a balance between removing large objects and retaining detailed ground features. As shown in Figure 5, the progressive morphological filters proposed in [35,36] address this issue by gradually increasing the window size and height threshold.
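The following sketch illustrates the basic single-window variant of this opening-based filtering idea; it is not SMRF itself. The point cloud is assumed to be an (N, 3) array, and the cell size, window size, and height tolerance are assumed values.

```python
import numpy as np
from scipy import ndimage

def morphological_ground_mask(points, cell=1.0, window=15, tol=0.5):
    """Single-window morphological ground filter (illustrative, not SMRF itself).
    points: (N, 3) array of (x, y, z); cell (m), window (cells) and tol (m) are assumed values."""
    ix = ((points[:, 0] - points[:, 0].min()) / cell).astype(int)
    iy = ((points[:, 1] - points[:, 1].min()) / cell).astype(int)

    # Rasterize using the lowest height within each cell
    grid = np.full((ix.max() + 1, iy.max() + 1), np.inf)
    np.minimum.at(grid, (ix, iy), points[:, 2])
    grid[np.isinf(grid)] = grid[np.isfinite(grid)].max()   # crude fill for empty cells

    # Opening (erosion then dilation) removes raised objects smaller than the window
    opened = ndimage.grey_opening(grid, size=(window, window))

    # A point is labeled ground if its height changes little with respect to the opened surface
    return (points[:, 2] - opened[ix, iy]) < tol
```

Progressive filters and SMRF refine this idea by growing the window and height threshold iteratively rather than using a single fixed window.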
In this paper, the ground extraction is performed using the simple morphological filter (SMRF) proposed in [35]. Subsequently, we divide the ground range from the original point cloud into sub-intervals, project the ground points onto each sub-interval, and calculate the average height of the points within each sub-interval. The RANSAC method is then employed to fit a quadratic function that models the relationship between the average height and the position of the ground range. For a single flight-acquired point cloud, using the center position along the ground distance axis as a reference, we calculate the required upward or downward adjustment in height for each point based on its distance from the center along the ground distance axis and its relationship with the fitted quadratic curve. This allows us to correct the overall height of the point cloud, ensuring that each point is adjusted appropriately to align with the desired height. The flowchart of point cloud height correction is shown in Figure 6.
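A minimal sketch of this fitting and correction step is given below, assuming the ground points have already been extracted (for example, with SMRF). The sub-interval width, number of RANSAC iterations, and inlier tolerance are assumed values, not taken from the paper.

```python
import numpy as np

def fit_ground_curve(ground_y, ground_z, bin_width=5.0, n_iter=200, tol=0.3, seed=0):
    """Fit z = a*y^2 + b*y + c to the binned mean ground heights with a small RANSAC loop.
    ground_y, ground_z: ground range and height of the extracted ground points."""
    edges = np.arange(ground_y.min(), ground_y.max() + bin_width, bin_width)
    idx = np.digitize(ground_y, edges)
    yc = np.array([ground_y[idx == b].mean() for b in np.unique(idx)])
    zc = np.array([ground_z[idx == b].mean() for b in np.unique(idx)])   # mean height per sub-interval

    rng = np.random.default_rng(seed)
    best_coef, best_count = None, -1
    for _ in range(n_iter):
        sample = rng.choice(len(yc), size=3, replace=False)
        coef = np.polyfit(yc[sample], zc[sample], 2)
        inliers = np.abs(np.polyval(coef, yc) - zc) < tol
        if inliers.sum() > best_count:
            best_coef, best_count = np.polyfit(yc[inliers], zc[inliers], 2), inliers.sum()
    return best_coef

def correct_heights(points, coef):
    """Remove the fitted trend, using the centre of the ground range extent as the reference."""
    y_ref = 0.5 * (points[:, 1].min() + points[:, 1].max())
    out = points.copy()
    out[:, 2] -= np.polyval(coef, points[:, 1]) - np.polyval(coef, y_ref)
    return out
```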

2.2. Obtain Matching Points with KAZE

2.2.1. Generate Grayscale Image

In this study, the point cloud is projected onto the x–y plane, where the x-axis represents the azimuth direction and the y-axis represents the ground range direction. A 2D matrix is created by dividing the x–y plane into grid cells with a step size of 0.8 m along both axes. The average height of the points that fall within each grid cell is computed and assigned to the corresponding element of the matrix.
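A sketch of this rasterization is shown below. Leaving empty cells at 0 is an assumption; it is these empty cells that produce the unstructured holes discussed in Section 2.2.3.

```python
import numpy as np

def point_cloud_to_height_image(points, step=0.8):
    """Project (x, y, z) points onto the azimuth-ground range plane and return an image
    whose pixel value is the mean height of the points falling in each 0.8 m cell."""
    ix = ((points[:, 0] - points[:, 0].min()) / step).astype(int)   # azimuth rows
    iy = ((points[:, 1] - points[:, 1].min()) / step).astype(int)   # ground range columns

    shape = (ix.max() + 1, iy.max() + 1)
    height_sum = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(height_sum, (ix, iy), points[:, 2])
    np.add.at(count, (ix, iy), 1)

    # Mean height per cell; empty cells are left at 0 (assumption), producing holes
    return np.divide(height_sum, count, out=np.zeros(shape), where=count > 0)
```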

2.2.2. Feature Point Extraction

Traditional feature detection methods employ Gaussian linear scale-space downsampling to detect feature points. Visually, the matching points between two images are typically found along the edges and certain details within the scene. However, Gaussian filtering can cause edge blurring and loss of fine details. As a result, using linear scale-space feature detection algorithms for image registration in this study yielded unsatisfactory results. The KAZE algorithm uses nonlinear diffusion filtering to construct a scale space, which effectively reduces image edge blur and detail loss [33]. It retains higher local accuracy and distinguishability while maintaining scale invariance. The KAZE algorithm mainly includes the following steps:
1. Constructing Nonlinear Scale Space:
The KAZE algorithm constructs a nonlinear scale space through the utilization of nonlinear diffusion filtering and the Additive Operator Splitting (AOS) algorithm. The nonlinear diffusion filtering method interprets the variations in image brightness at different scales as the divergence of a certain form of flow function, which can be described by nonlinear partial differential equations:
$$\frac{\partial L}{\partial t} = \mathrm{div}\left(c(x, y, t)\cdot \nabla L\right)$$
where L represents the image brightness, c denotes the conductivity function, and t represents the scale parameter. The conductivity function determines the extent to which the diffusion process in an image adapts to its local structure. The expression for c is as follows:
$$c(x, y, t) = g\left(\lvert \nabla L_\sigma(x, y, t) \rvert\right)$$
where ∇L_σ is the gradient of the image after Gaussian smoothing. In this study, we adopt the g_2 function as proposed in [33]:
$$g_2 = \frac{1}{1 + \dfrac{\lvert \nabla L_\sigma \rvert^2}{k^2}}$$
Due to the lack of specific analytical solutions for the partial differential equation of nonlinear diffusion filtering, numerical methods are required to estimate the solution of the differential equation. The linear implicit scheme is a feasible discretization method, and the equation is as follows:
$$\frac{L^{i+1} - L^{i}}{\tau} = \sum_{l=1}^{m} A_l\left(L^{i}\right) L^{i+1}$$
where A_l is the matrix that encodes the image conductivities along the l-th dimension and τ is the time step. The solution L^{i+1} of the equation is given as follows:
$$L^{i+1} = \left(I - \tau \sum_{l=1}^{m} A_l\left(L^{i}\right)\right)^{-1} L^{i}$$
The aforementioned steps constitute the fundamental construction scheme for nonlinear scale space.
2. Feature point detection:
Since nonlinear diffusion filtering is based on the theory of heat conduction, its model is formulated in terms of time units. Therefore, it is necessary to perform a conversion between image pixel units and time units. This conversion can be represented by Equation (13), where t_i is referred to as the evolution time.
$$t_i = \frac{1}{2}\sigma_i^2$$
Using the AOS scheme, the nonlinear scale space can be represented as follows:
$$L^{i+1} = \left(I - (t_{i+1} - t_i) \sum_{l=1}^{m} A_l\left(L^{i}\right)\right)^{-1} L^{i}$$
The feature point detection in KAZE is achieved by searching for local maxima using the Hessian matrix:
$$L_{\mathrm{Hessian}} = \sigma^2\left(L_{xx} L_{yy} - L_{xy}^2\right)$$
Each pixel is compared with the pixels in a 3 × 3 neighborhood window at its current scale as well as the scales above and below. If the pixel value is greater than all the pixels in the neighborhood window, it is considered a feature point. Subpixel-level localization of feature points is achieved by employing a Taylor expansion in the scale space.
3. Feature descriptor:
For a feature point with scale parameter σ_i, a window of size 24σ_i × 24σ_i is taken on the gradient image, centered at the feature point. The window is divided into a 4 × 4 grid of sub-regions, each of size 9σ_i × 9σ_i, with adjacent sub-regions overlapping by a strip of width 2σ_i. Each sub-region is weighted using a Gaussian kernel with standard deviation σ_1 = 2.5σ_i, and a descriptor vector of length 4 is computed for it. These sub-region descriptors are then weighted using another 4 × 4 Gaussian window with standard deviation σ_2 = 1.5σ_i. Finally, the descriptor is normalized to obtain a 64-dimensional descriptor vector.
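A sketch of this feature extraction step using OpenCV's KAZE implementation, which builds the nonlinear scale space and 64-dimensional descriptors described above internally, is given below; the normalization of the float height image to 8-bit is our own assumption.

```python
import cv2
import numpy as np

def detect_kaze(height_img):
    """Detect KAZE keypoints and 64-D descriptors on the mean-height image."""
    lo, hi = float(height_img.min()), float(height_img.max())
    img8 = np.uint8(255.0 * (height_img - lo) / max(hi - lo, 1e-6))   # assumed 8-bit scaling

    kaze = cv2.KAZE_create()   # default settings (Perona-Malik g2 diffusivity, 64-float descriptors)
    keypoints, descriptors = kaze.detectAndCompute(img8, None)
    return keypoints, descriptors
```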

2.2.3. Feature Matching Method

Traditional feature point matching algorithms typically compute the Euclidean distance between feature vectors and utilize the NNDR strategy to determine whether two feature points are a match. After applying the NNDR, RANSAC methods are often employed to determine the final set of matched point pairs.
The generated images from the point cloud exhibit a significant number of unstructured holes with an unordered distribution. The application of the KAZE algorithm leads to the detection of numerous unstable feature points, and many of these feature points have very similar descriptors. Increasing the threshold in the NNDR algorithm does not yield better matching results; instead, it may even result in the elimination of correctly matched point pairs.
This study constructs a circular-region mean filter and applies it to the original image. The filtering eliminates small holes present in the original image while also increasing the blurring along image boundaries. Subsequently, KAZE is employed to detect feature points separately in both the original and filtered images. For a feature point p = (x, y) in the original image, if there exists a feature point q = (x′, y′) in the filtered image satisfying ‖p − q‖ ≤ ε, then p is considered a stable feature point, where ε is one pixel length. We then use the NNDR to find matching point pairs. In this study, there is no rotation between the two images; only displacements exist in the azimuth and ground range directions, together with a certain amount of scaling in the ground range direction. To further eliminate false matches, the angle between the displacement vector of each matched point pair and the horizontal vector is computed. After applying the NNDR, let the set of feature points in the target image be A = {a_1, a_2, …, a_n}, with individual points a_n = (x_n, y_n), and the set of feature points in the registration image be B = {b_1, b_2, …, b_n}, with individual points b_n = (w_n, k_n). The angle between the displacement vector and the horizontal vector (1, 0) is
$$\theta_n = \arccos\left(\frac{w_n - x_n}{\sqrt{(w_n - x_n)^2 + (k_n - y_n)^2}}\right)$$
The probability distribution of θ_n is depicted in Figure 7. It can be observed that, after the NNDR, θ_n is concentrated around a prominent peak, which exhibits a triangular shape. To eliminate matching point pairs that deviate from the main peak, a threshold is set so that only matching point pairs whose θ_n lies within the triangular peak are retained. The threshold is determined as follows:
$$\delta_\theta = \frac{1}{\max(\mathrm{PDF}_\theta)}$$
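The sketch below strings these steps together: a circular mean filter, selection of stable keypoints (an original-image keypoint with a blurred-image keypoint within one pixel), NNDR matching, and the angle-consistency test. The filter radius, NNDR ratio, histogram bin count, and angular window are assumed thresholds, and the signed angle from arctan2 is used in place of the arccos form of Equation (16).

```python
import cv2
import numpy as np

def stable_keypoints(img8, radius=3):
    """Keep only keypoints of the original image that reappear (within 1 px) after a
    circular-region mean filter; the filter radius is an assumed value."""
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1)).astype(np.float32)
    blurred = cv2.filter2D(img8, -1, disk / disk.sum())

    kaze = cv2.KAZE_create()
    kp, desc = kaze.detectAndCompute(img8, None)
    kp_b, _ = kaze.detectAndCompute(blurred, None)

    pts_b = np.array([p.pt for p in kp_b])
    keep = [i for i, p in enumerate(kp) if np.min(np.linalg.norm(pts_b - p.pt, axis=1)) <= 1.0]
    return [kp[i] for i in keep], desc[keep]

def match_with_angle_filter(kp1, d1, kp2, d2, ratio=0.8, angle_win=5.0):
    """NNDR matching followed by an angle-consistency test analogous to Eq. (16)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < ratio * n.distance]

    src = np.array([kp1[m.queryIdx].pt for m in good])
    dst = np.array([kp2[m.trainIdx].pt for m in good])
    ang = np.degrees(np.arctan2(dst[:, 1] - src[:, 1], dst[:, 0] - src[:, 0]))

    # Keep pairs whose angle lies near the histogram peak (the triangular main lobe)
    hist, edges = np.histogram(ang, bins=90)
    peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    keep = np.abs(ang - peak) < angle_win
    return src[keep], dst[keep]
```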

2.3. Calculate 3D Transformations

The airborne array interferometric SAR system incorporates a high-precision position and orientation system (POS); consequently, the errors in the azimuth direction between the point clouds obtained from the two flights are minimal. In the ground range direction, apart from a certain displacement, there is also scaling. In the vertical direction, after the flattened phase error correction, only a displacement remains. The positions of the feature points in the image correspond to the coordinates in the azimuth and ground range directions of the point cloud, while the intensity of the feature point pixels corresponds to the average height of the point cloud. We computed the azimuth offsets between matching points and performed a statistical analysis of their probability distribution, as illustrated in Figure 8a. We employed a quantile-quantile (Q-Q) plot to assess whether this dataset follows a Gaussian distribution.
Within the Q-Q plot, a significant number of points align along a straight line while demonstrating some curvature on the tails. This curvature phenomenon can be attributed to the existence of upper and lower limits in the actual data. Therefore, the deviation of azimuthal orientations between point clouds can be confirmed as the offset corresponding to the maximum probability density.
The offsets in the ground distance and height directions corresponding to the matching points of the two images are depicted in Figure 9a and 9b, respectively. In Figure 9a, the abscissa represents the ground distance coordinates of the matching points in the source image. These points are fitted to a straight line using the least squares method, and the slope of the red line reflects the stretching effect in the ground distance direction between the two acquired point clouds. For the source point cloud, the offset of each point is determined from the relationship between its ground distance coordinate and the fitted line. The intensity difference data between matched points of the two images fall into four segments. A value of 0 is observed when no points fall into the matched pixel in either of the two images, whereas the outliers in the upper and lower sections are caused by pixel intensities of 0 in only one of the two matched images. In this study, we only consider the real values from the middle section, where the height offset fluctuates within ±2 m, as shown in Figure 9b. The average value of these points is taken as the offset in the height direction between the point clouds.
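A sketch of this transformation estimation is given below. It assumes the matched pixel coordinates are supplied as (azimuth row, ground range column) pairs and that the images were generated with the 0.8 m grid of Section 2.2.1; the histogram bin count used for the azimuth mode is an assumed value.

```python
import numpy as np

def estimate_transform(src_px, dst_px, src_img, dst_img, step=0.8):
    """Estimate azimuth offset, ground range line (shift + stretch), and height offset
    from matched pixel pairs given as (azimuth row, ground range column)."""
    # Azimuth: take the offset with maximum probability density (histogram mode)
    d_az = (dst_px[:, 0] - src_px[:, 0]) * step
    hist, edges = np.histogram(d_az, bins=50)
    azimuth_offset = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

    # Ground range: least-squares line dst = k * src + b, equivalent to fitting the
    # offset against the source coordinate; k captures the stretching effect
    k, b = np.polyfit(src_px[:, 1] * step, dst_px[:, 1] * step, 1)

    # Height: mean intensity difference of matched pixels whose difference stays within +/- 2 m
    si = np.round(src_px).astype(int)
    di = np.round(dst_px).astype(int)
    dh = dst_img[di[:, 0], di[:, 1]] - src_img[si[:, 0], si[:, 1]]
    height_offset = dh[np.abs(dh) < 2.0].mean()

    return azimuth_offset, (k, b), height_offset
```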

3. Results

3.1. Experimental Data

To conduct experimental validation, we utilized point cloud data obtained from actual flight tests conducted in Sichuan Province in 2022. The radar images are presented in Figure 10a and Figure 10c, respectively. The flight-related parameters are listed in Table 2, where S_a represents the azimuth resolution, S_r the range resolution, and S_h the elevation resolution.
The area of experimental scene 1 is 0.22 square kilometers, with a ground range length of 0.31 km. The area of experimental scene 2 is 0.72 square kilometers, with a ground range length of 0.83 km. Scene 2 is larger and contains more diverse elements, including clear roads, bridges, and riverbanks, which facilitates the image registration task. Scene 1, on the other hand, is smaller, with only a prominent road on the left side; the generated images exhibit a simpler composition and primarily serve to verify the applicability of the proposed method. The point clouds of both scenes depict urban areas that conform to the level-ground assumption adopted in this study.

3.2. Evaluation Criterion

Traditional evaluation metrics for point cloud registration methods include Root Mean Square Error (RMSE), mutual information, entropy, and point cloud overlap. RMSE measures the distance difference between point pairs in point clouds. It is calculated by computing the distances between corresponding points in the point clouds, taking the square of each distance, averaging them, and then taking the square root to obtain the RMSE value. Mutual information is calculated to assess the similarity between two point clouds. Entropy is used to measure the uncertainty of point distribution within a point cloud and can evaluate the consistency of its structure. Point cloud overlap evaluates the registration quality by calculating the proportion of the overlap scene between two point clouds.
Due to the low overlap between the SAR point clouds obtained from the two flight experiments, these metrics cannot directly evaluate the effectiveness of SAR point cloud registration. Therefore, we adopt the metrics proposed in [31]. Ref. [31] utilizes the constraint of parallel relative facades of the same building to extract the building facades from the fused point cloud. For effective registration methods, the directions of the two relative facades should be parallel. Hence, as illustrated in Figure 11, ref. [31] calculates the angular difference θ between the two normal vectors of each facade pair.
The evaluation metrics proposed in [31] only capture the effectiveness of building point registration. In this study, we manually selected certain road point clouds and considered them to be overlapping between two point clouds. The RMSE between these road point clouds was computed as an evaluation metric. The definition of RMSE is as follows:
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\lVert x_i - y_i \rVert^2}{N}}$$
where X and Y represent the two point clouds, N is the number of corresponding points, x_i is the i-th point in X, and y_i is the corresponding point in Y for x_i.
The locations of the road are illustrated in the red box in Figure 12. On the other hand, we adopted the correntropy proposed in [37] as an additional evaluation metric. Correntropy effectively alleviates the impact of outliers and noise, and its definition is as follows:
$$V(X, Y) = \frac{1}{N}\sum_{i=1}^{N}\exp\left(-\frac{\lVert x_i - y_i \rVert^2}{2\sigma^2}\right)$$
The parameters are defined as in Equation (18), and σ is set to 1 in this paper. A larger correntropy indicates better registration performance.
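For reference, the two metrics can be computed as below, assuming x and y are (N, 3) arrays of corresponding points.

```python
import numpy as np

def rmse(x, y):
    """Eq. (18): root mean square Euclidean distance between corresponding points."""
    return np.sqrt(np.mean(np.sum((x - y) ** 2, axis=1)))

def correntropy(x, y, sigma=1.0):
    """Eq. (19): Gaussian-kernel correntropy; sigma = 1 as used in this paper."""
    d2 = np.sum((x - y) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
```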

3.3. Experimental Results

To validate the claimed superiority, in this subsection we apply our proposed method, the approach outlined in [31], and the classical ICP algorithm to the point cloud fusion task of two distinct scenes. In scene 1, the two point clouds consist of 1,672,216 and 1,682,162 points, respectively. After applying simple morphological filtering and outlier removal using the RANSAC method, the ground points for scene 1 were obtained. The relationship between the average height of the ground points and the ground distance for the two corresponding point clouds in scene 1 is shown in Figure 13.
After height calibration of the point clouds, a two-dimensional image was generated using the method mentioned in Section 2.2.1. The KAZE algorithm was employed to extract key points from the image, and matching points were obtained using our proposed method, as illustrated in Figure 14.
In scene 1, the area is relatively small, with the majority of the scene being comprised of buildings. The left side of Figure 14 corresponds to a small area in Figure 10b. Due to occlusion caused by buildings, there are significant shadows present near the riverbank adjacent to the building area. As a result, the majority of the matching points are concentrated in the road area above the image and in the vicinity of the bridges spanning the river.
The results of the point cloud registration are depicted in Figure 15. For each of the three fused results, building facades were extracted using the density threshold filtering method. After excluding facades without counterparts, there are 22 pairs of corresponding building facades.
The ICP algorithm tends to maximize the alignment of the two point clouds. From the extracted building facades, it can be observed that the two opposing facades of the same building almost completely overlap; the algorithm is therefore essentially ineffective for the task of SAR point cloud fusion. The method in [31] first extracts the building facades and calculates the transformation from the source point cloud to the target point cloud using the constraint that two opposite facades of the same building are parallel to each other. Visually, there is no significant difference between the two methods in the extracted building point clouds after registration. Table 3 presents the quantified results.
From the third quantitative indicator in Table 3, it seems that our method does not have superiority over the method proposed in [31]. However, the effect of SAR point cloud registration should not be solely considered from the constraints of extracting parallel building facades. The algorithm in [31] minimizes the angle between the planes fitted by the building facade point cloud and the correspondence between the center points, naturally resulting in better indicators. In the Euclidean distance metric of point-to-point, our method is significantly superior to the method proposed in [31]. As shown in Figure 16, our proposed approach exhibits superior accuracy. In our approach, 81.91% of the nearest neighbor distances fall within the range of 0 to 5 m, whereas the corresponding value for the comparative method is 76.05%.
For scene 2, the number of points obtained during the two flights is 3,536,789 and 4,554,655, respectively. Some results of applying our method to the point cloud of scene 2 have already been shown in Section 2. Figure 17 shows the matching point pairs.
Scene 2 has a large area and rich contents, and more matching point pairs were obtained using the KAZE algorithm compared to scene 1. When integrating the point clouds of scene 2 using the algorithm proposed in [31], there was a significant difference in the extracted sets of building facade point clouds from the two point clouds. In some cases, only one of the point clouds captured the facade corresponding to the same building, and there were substantial disparities in the relative facades of most buildings. The coarse registration method employed in [31] faced challenges in determining which facades corresponded to each other. Consequently, we manually selected several facades with better extraction results and used the algorithm in [31] to fuse them in order to compare the registration performance of our proposed algorithm against that of [31].
The registration results are depicted in Figure 18. Observably, the results of the fusion using the ICP algorithm display misaligned architectural structures, with considerable fusion errors evident in ground-level roads. The method proposed in [31] exhibits poor fusion results for the ground in the upper part of the scene, where the two point clouds fail to align adequately. Conversely, our proposed method demonstrates superior fusion outcomes for both ground and architectural points in the SAR point cloud.
For a more detailed analysis, we employed a density threshold method to extract the buildings within the scene, as depicted in Figure 19. The results indicate that concerning the reconstruction of architectural structures, there is no significant difference between our proposed method and the approach outlined in [31]. However, the ICP algorithm merely aligns the two point clouds without adequately reconstructing the architectural elements. Table 4 presents the quantified results.
As shown in Figure 20, compared to scene 1, the application of our approach in scene 2 demonstrates a more pronounced advantage. In our approach, 69.14% of the nearest neighbor distances fall within the range of 0 to 5 m, whereas the corresponding value for the comparative method is 44.71%.

3.4. Time Performance

To evaluate the efficiency of our proposed method, we computed the time cost of each registration process. Our method was implemented in MATLAB 2021, and all experiments were conducted on a computer with an AMD R7 5800H processor and 16 GB of memory. The ICP algorithm requires multiple iterations to select the closest points between two point clouds and calculate the transformation relationship; as the number of points in the point clouds increases, the computation time also increases. The method proposed in [31] involves extracting building facades from the point cloud and fitting these facades to generate the corresponding parameters, so its processing time is related to the number of buildings in the scene. However, in large-scale scenarios, significant differences might exist between the building facades extracted from the two point clouds. This dissimilarity sometimes prevents the automatic determination of which facades belong to the same building, thereby limiting its application. The time required by our proposed method depends primarily on the size of the scene. Our proposed method demonstrates clear advantages in terms of efficiency.

4. Discussion

This study presents an approach to address the point cloud registration problem of array InSAR, whose point clouds contain a large number of noisy points and exhibit significant errors. The approach utilizes image registration methods to achieve point cloud registration. The analysis focuses on the height errors along the ground range direction within a single flight experiment and the scale variations in the ground range direction between two flight experiments. Unlike traditional point cloud registration tasks, which compute a rotation matrix and translation vector as transformation parameters, the registration of array InSAR point clouds primarily involves error correction and computation of the displacement between the two point clouds.
The urban scenes under consideration predominantly consist of building areas, but they also contain several features that are beneficial for point cloud registration tasks, such as road lines, bridges, and structured artificial facilities. In a specific scenario, referred to as scene 1, with a relatively small area, there are only noticeable common features on the left side of the two point clouds. Although the number of computed matching point pairs is limited, it does not affect the accuracy of point cloud registration, as these matching points can be considered true correspondences.
To achieve high-precision point cloud registration, this study relies on subpixel-accurate image registration to calculate the offsets between the point clouds in the azimuth and ground range directions. The study also reveals that the majority of image-matching points are concentrated in the unobstructed ground areas. The points directly beneath the building facades contain a significant amount of clutter caused by triple scattering, and, due to interference from high-angle sidelobes, the unstructured ground areas also present some artifacts in the vertical dimension. By using the average height of the point cloud as the pixel intensity of the image and the pixel intensity difference of the matching points as the offset in the height direction, the registration error in the height direction can be kept below the height resolution of the array InSAR point cloud.
In contrast to previous work, which innovatively utilized the angles between the extracted normal vectors of building facades and the distances from the facade centers to the extended normal vectors of opposing facades as evaluation metrics, this study found that accurately extracting building facades from array InSAR point clouds is challenging. The simple application of density threshold filtering methods tends to filter out low-rise buildings, and some extracted facades are incomplete, resulting in significant differences between opposing facades of the same building and making it difficult to fit the facades correctly. Moreover, the presence of clutter generated by triple scattering at the bottom of the buildings hinders the accurate correspondence of the fitted facade center heights. As for the classic ICP algorithm, it is entirely unsuitable for SAR point cloud registration tasks because the two point clouds lack matching points. The approach of manually annotating control points and using the Euclidean distances between them as evaluation metrics also has limitations, as the true correspondences of the manually annotated points cannot be determined. Therefore, for the registration task of array InSAR point clouds, it is necessary to define more comprehensive metrics to evaluate the accuracy of building facade extraction and point cloud registration.

5. Conclusions

This paper proposes an automatic image-based registration method for array InSAR point cloud registration. It analyzes the height errors present in array InSAR point clouds and describes the entire process of point cloud registration.
According to the InSAR system model, an analysis of the relationship between the height errors in point clouds and their ground range positions is conducted. Initially, the SMRF algorithm is employed to extract the ground portion of the point cloud, which is utilized for fitting the relationship between height errors and ground range. Subsequently, the height-corrected point clouds are projected onto the azimuth-ground range plane to generate images, where the pixel intensity is represented by the average height of all points falling within the pixel. Finally, the KAZE algorithm, along with an angular threshold, is employed to extract matching points between two images. The transformation relationship between the two point clouds is then calculated based on the positions and intensity differences of the matching points.
Previous research on array InSAR point cloud registration is limited, and this paper primarily compares the proposed method with the approach presented in [31]. Experimental results using real data demonstrate the high robustness of the proposed method in two different scenarios. For the architectural elements within the scene, the average angular difference between their respective facades is less than 0.5°. As for the ground portions within the scene, the RMSE after registration is less than 1.5 m. These results are considered acceptable for SAR point clouds. Compared to previous methods that extract and fuse building facades, our approach addresses point cloud registration from the perspective of image registration. It involves fewer steps, is more efficient, and consumes only 14% of the time required by the method proposed in [31].
In future work, for array InSAR point cloud registration, we consider utilizing deep learning methods after obtaining a large dataset to achieve the task of point cloud registration.

Author Contributions

Conceptualization, C.C. and F.Z.; methodology, C.C.; software, Y.L.; validation, C.C. and M.S.; formal analysis, C.C.; investigation, C.C.; resources, F.Z.; data curation, Y.L.; writing—original draft preparation, C.C.; writing—review and editing, F.Z.; visualization, W.L.; supervision, Z.L.; project administration, L.C.; funding acquisition, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, X.; Mizukami, Y.; Tada, M.; Matsuno, F. Navigation of a mobile robot in a dynamic environment using a point cloud map. Artif. Life Robot. 2021, 26, 10–20. [Google Scholar] [CrossRef]
  2. Chen, S.; Liu, B.; Feng, C.; Vallespi-Gonzalez, C.; Wellington, C. 3d point cloud processing and learning for autonomous driving: Impacting map creation, localization, and perception. IEEE Signal Process. Mag. 2020, 38, 68–86. [Google Scholar] [CrossRef]
  3. Fuhrmann, S.; Langguth, F.; Goesele, M. Mve-a multi-view reconstruction environment. GCH 2014, 3, 4. [Google Scholar]
  4. Blanc, T.; El Beheiry, M.; Caporal, C.; Masson, J.-B.; Hajj, B. Genuage: Visualize and analyze multidimensional single-molecule point cloud data in virtual reality. Nat. Methods 2020, 17, 1100–1102. [Google Scholar] [CrossRef]
  5. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342. [Google Scholar] [CrossRef]
  6. Pu, S.; Vosselman, G. Knowledge based reconstruction of building models from terrestrial laser scanning data. ISPRS J. Photogramm. Remote Sens. 2009, 64, 575–584. [Google Scholar] [CrossRef]
  7. Barazzetti, L.; Scaioni, M.; Remondino, F. Orientation and 3D modelling from markerless terrestrial images: Combining accuracy with automation. Photogramm. Rec. 2010, 25, 356–381. [Google Scholar] [CrossRef]
  8. Simon, L.; Teboul, O.; Koutsourakis, P.; Van Gool, L.; Paragios, N. Parameter-free/pareto-driven procedural 3d reconstruction of buildings from ground-level sequences. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 518–525. [Google Scholar]
  9. Zhu, X.X.; Bamler, R. Tomographic SAR inversion by L1-norm regularization—The compressive sensing approach. IEEE Trans. Geosci. Remote Sens. 2010, 48, 3839–3846. [Google Scholar] [CrossRef]
  10. Zeng, Z.; Sun, J.; Han, Z.; Hong, W. SAR automatic target recognition method based on multi-stream complex-valued networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
  11. Gernhardt, S.; Cong, X.; Eineder, M.; Hinz, S.; Bamler, R. Geometrical fusion of multitrack PS point clouds. IEEE Geosci. Remote Sens. Lett. 2011, 9, 38–42. [Google Scholar] [CrossRef]
  12. Hu, F.; Wang, F.; Ren, Y.; Xu, F.; Qiu, X.; Ding, C.; Jin, Y. Error analysis and 3D reconstruction using airborne array InSAR images. ISPRS J. Photogramm. Remote Sens. 2022, 190, 113–128. [Google Scholar] [CrossRef]
  13. Ge, X.; Hu, H.; Wu, B. Image-guided registration of unordered terrestrial laser scanning point clouds for urban scenes. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9264–9276. [Google Scholar] [CrossRef]
  14. Barnea, S.; Filin, S. Registration of terrestrial laser scans via image based features. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 32–37. [Google Scholar]
  15. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-points congruent sets for robust pairwise surface registration. ACM Trans. Graph. 2008, 27, 1–10. [Google Scholar] [CrossRef]
  16. Jaw, J.J.; Chuang, T.Y. Registration of ground-based LiDAR point clouds by means of 3D line features. J. Chin. Inst. Eng. 2008, 31, 1031–1045. [Google Scholar] [CrossRef]
  17. Cheng, L.; Wu, Y.; Tong, L.; Chen, Y.; Li, M. Hierarchical registration method for airborne and vehicle lidar point cloud. Remote Sens. 2015, 7, 13921–13944. [Google Scholar] [CrossRef]
  18. Lee, J.; Yu, K.; Kim, Y.; Habib, A.F. Adjustment of discrepancies between LIDAR data strips using linear features. IEEE Geosci. Remote Sens. Lett. 2007, 4, 475–479. [Google Scholar] [CrossRef]
  19. Gruen, A.; Akca, D. Least squares 3D surface and curve matching. ISPRS J. Photogramm. Remote Sens. 2005, 59, 151–174. [Google Scholar] [CrossRef]
  20. Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Proceedings of the Sensor Fusion IV: Control Paradigms and Data Structures, Boston, MA, USA, 12–15 November 1992; pp. 586–606. [Google Scholar]
  21. Al-Durgham, K.; Habib, A.; Kwak, E. RANSAC approach for automated registration of terrestrial laser scans using linear features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci 2013, 2, 13–18. [Google Scholar] [CrossRef]
  22. Takeuchi, E.; Tsubouchi, T. A 3-D scan matching using improved 3-D normal distributions transform for mobile robotic mapping. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–13 October 2006; pp. 3068–3073. [Google Scholar]
  23. Fusiello, A.; Castellani, U.; Ronchetti, L.; Murino, V. Model acquisition by registration of multiple acoustic range views. In Proceedings of the Computer Vision—ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002; pp. 805–819. [Google Scholar]
  24. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  25. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv 2017, arXiv:1706.02413. [Google Scholar]
  26. Deng, H.; Birdal, T.; Ilic, S. 3D local features for direct pairwise registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3244–3253. [Google Scholar]
  27. Yang, J.; Zhao, C.; Xian, K.; Zhu, A.; Cao, Z. Learning to fuse local geometric features for 3D rigid data matching. Inf. Fusion 2020, 61, 24–35. [Google Scholar] [CrossRef]
  28. Valsesia, D.; Fracastoro, G.; Magli, E. Learning localized representations of point clouds with graph-convolutional generative adversarial networks. IEEE Trans. Multimed. 2020, 23, 402–414. [Google Scholar] [CrossRef]
  29. Huang, X.; Mei, G.; Zhang, J. Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11366–11374. [Google Scholar]
  30. Wang, Y.; Zhu, X.X. Automatic feature-based geometric fusion of multiview TomoSAR point clouds in urban area. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 953–965. [Google Scholar] [CrossRef]
  31. Tong, X.; Zhang, X.; Liu, S.; Ye, Z.; Feng, Y.; Xie, H.; Chen, L.; Zhang, F.; Han, J.; Jin, Y. Automatic Registration of Very Low Overlapping Array InSAR Point Clouds in Urban Scenes. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–25. [Google Scholar] [CrossRef]
  32. Cheng, R.; Liang, X.; Zhang, F.; Guo, Q.; Chen, L. Multiple-bounce scattering of Tomo-SAR in single-pass mode for building reconstructions. IEEE Access 2019, 7, 124341–124350. [Google Scholar] [CrossRef]
  33. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 214–227. [Google Scholar]
  34. Chen, C.; Guo, J.; Wu, H.; Li, Y.; Shi, B. Performance comparison of filtering algorithms for high-density airborne LiDAR point clouds over complex LandScapes. Remote Sens. 2021, 13, 2663. [Google Scholar] [CrossRef]
  35. Pingel, T.J.; Clarke, K.C.; McBride, W.A. An improved simple morphological filter for the terrain classification of airborne LIDAR data. ISPRS J. Photogramm. Remote Sens. 2013, 77, 21–30. [Google Scholar] [CrossRef]
  36. Zhang, K.; Chen, S.-C.; Whitman, D.; Shyu, M.-L.; Yan, J.; Zhang, C. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 872–882. [Google Scholar] [CrossRef]
  37. Zhang, X.; Jian, L.; Xu, M. Robust 3D point cloud registration based on bidirectional Maximum Correntropy Criterion. PLoS ONE 2018, 13, e0197542. [Google Scholar] [CrossRef]
Figure 1. Airborne array InSAR acquires complete 3D information of an urban area through two flight tests.
Figure 2. Capture geometry of the two tracks.
Figure 3. Algorithm flow.
Figure 4. Calculate target height using local incidence angle of reference point. (a) InSAR system geometric model. (b) Simulation of the relationship between the height error and ground range position.
Figure 5. SMRF flow.
Figure 6. The flowchart of point cloud height correction.
Figure 7. The probability distribution of θ n .
Figure 8. Statistics of azimuth offset between matching points. (a) Probability distribution of offset; (b) Q-Q plot (sample data-standard normal).
Figure 9. The offsets in the ground distance direction and height direction between the matching points. (a) Offset of ground distance; (b) Height offset.
Figure 10. Intensity SAR images: (a) The SAR image obtained from the first flight (platform moving from west to east); (b) Point clouds of the two scenes generated from the first flight; (c) The SAR image obtained from the second flight (platform moving from east to west); (d) Point clouds of the two scenes generated from the second flight.
Figure 11. Evaluation index for registration results. (a) Two parallel facades; (b) The angle difference of normal vector from the source façade center to the normal extension of the target façade.
Figure 12. The positions of control points.
Figure 13. The average height of the ground as a function of the ground distance position. (a) Source point cloud; (b) Target point cloud.
Figure 14. Results of image matching algorithms.
Figure 15. The effect of registration in scene 1 after using (a) our proposed method, (b) the method of [31], and (c) ICP.
Figure 16. Histogram of nearest neighbor distance of (a) our proposed approach and (b) the method of [31].
Figure 17. Results of image matching algorithms.
Figure 18. The effect of registration in scene 2 after using (a) our proposed method, (b) the method of [31], and (c) ICP.
Figure 19. The registration effect on the buildings within scene 2 after using (a) our proposed method, (b) the method of [31], and (c) ICP.
Figure 20. Histogram of nearest neighbor distance of (a) our proposed approach and (b) the method of [31].
Table 1. Simulation parameters.
H/(m)    λ/(cm)    h/(m)    B/(m)    θ_P/(°)
4000     2         100      2        20–45
Table 2. Flight parameters.
H/(m)    Band    α/(°)    B/(m)    S_a/(m)    S_r/(m)    S_h/(m)
4500     Ku      0        1.986    0.237      0.1875     1.357
Table 3. Evaluation of registration accuracy for scene 1.
Method      RMSE (m)    Correntropy    Mean θ (deg)    Time (s)
ICP         5.3275      0.1074         0.5372          74.56183
[31]        4.0469      0.2282         0.4263          250.6351
Proposed    1.4773      0.2640         0.4292          3.8633
Table 4. Evaluation of registration accuracy for scene 2.
Method      RMSE (m)    Correntropy    Mean θ (deg)    Time (s)
ICP         8.357       0.0725         0.9382          453.3789
[31]        5.863       0.0910         0.3873          120.3572
Proposed    1.035       0.2239         0.3892          16.8694
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
