ISAR Image Matching and Three-Dimensional Scattering Imaging Based on Extracted Dominant Scatterers

This paper studies inverse synthetic aperture radar (ISAR) image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers. Under the condition of a long baseline between two radars, obvious rotation, scaling, distortion, and shift easily occur between the two-dimensional (2D) radar images. These effects make radar-image matching difficult and cannot be resolved by motion compensation and cross-correlation. Moreover, because of scattering anisotropy, existing image-matching algorithms, such as the scale-invariant feature transform (SIFT), do not adapt well to ISAR images. In addition, the angle between the target rotation axis and the radar line of sight (LOS) cannot be neglected; if it is, the calibration result will be smaller than the real projection size. Furthermore, this angle cannot be estimated by a monostatic radar. Therefore, instead of matching image by image, this paper proposes a novel ISAR image matching and 3D imaging method based on extracted scatterers to deal with these issues. First, taking advantage of ISAR image sparsity, the radar images are converted into scattering point sets. Then, coarse scatterer matching based on the random sample consensus (RANSAC) algorithm is performed. The scatterer heights and accurate affine transformation parameters are estimated iteratively. Based on the matched scatterers, information such as the angle and the 3D image can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO demonstrate the effectiveness of the proposed algorithm.


Introduction
Inverse synthetic aperture radar (ISAR) imaging, especially two-dimensional (2D) imaging, has been extensively used in civil and military applications because of its capability to generate high-resolution images of noncooperative targets. There are many methods of 2D imaging, covering noise suppression, sparse imaging, motion compensation, and so on [1][2][3][4][5][6][7][8][9][10][11][12]. A 2D image is the projection of a three-dimensional (3D) target onto the range-Doppler (RD) plane. However, 2D imaging has several unavoidable shortcomings. For example, the azimuth dimension of a 2D image only reflects the Doppler distribution of the scattering points. Moreover, most calibration algorithms achieve calibration by estimating the target rotation angle during the observation time [13,14]. These algorithms assume that the target rotation axis is orthogonal to the radar line of sight (LOS); in practice, however, this assumption does not always hold.

In Section 3, an affine transformation is performed to achieve accurate scattering point set matching; meanwhile, the scattering point height and the angle between the target spinning axis and the LOS are obtained. In Section 4, experiments based on two simulation datasets, generated with the electromagnetic simulation software CADFEKO, are performed to demonstrate the effectiveness of the proposed algorithm.

Signal Model
A radar system with two radars in an arbitrary configuration, located at A and B, is depicted in Figure 1a, where the radar coordinate system O0−X0Y0Z0, the transition coordinate system O−X1Y1Z1, and the target coordinate system O−XYZ are established. For ease of description and analysis, these three reference coordinate systems are denoted S0, S1, and S, respectively. Since we must deal with these three coordinate systems, scattering points in 3D space are written with a subscript indicating the specific reference system, e.g., P_S0 and P_S1 are the same point expressed in the coordinates of S0 and S1, respectively.

The reference system S0 is embedded in radar A and centered at O0. The X0-axis is aligned with the LOS of radar A. Radar B is located in the plane (X0, Y0). The reference system S is embedded in the target and centered at O. In practical conditions, external forces on the target produce angular motions that are represented by the angular rotation vector Ω, which is aligned with the Z-axis. Its projection onto the plane orthogonal to the LOS of radar A defines the effective rotation component Ω_eff, which forms the Z1-axis.

Figure 1. (a) The radar system. (b) Space vector Z-axis concerning the S0 system.
The plane (Y1, X1) is the imaging plane, whose axes correspond to the range and cross-range dimensions. For an easier understanding, the radar coordinate system and the target rotation axis are abstractly represented in Figure 1b. α is the angle between the target rotation axis and the X0-axis, marked as ARX, which influences the projection and calibration results. β is the angle between the target rotation axis and the Y0-axis, marked as ARY. Without loss of generality, we assume that the X1-axis is aligned with the X0-axis and the Y-axis is aligned with the Y1-axis. Following the right-handed convention, the Z0-axis and the X-axis can be derived. Moreover, we consider the target as a rigid body translating along the Z-axis with respect to system S. The target is composed of point-like scatterers; the position of scatterer P with respect to S is a rotation about the Z-axis at rate Ω plus a translation along it, where Ω = |Ω| and t_m denotes slow time. The angle between the Z-axis and the Z0-axis is θ, where the Z1-axis is the projection of the Z-axis onto the plane (Y0, Z0) and cos²θ = 1 − cos²α − cos²β. The angular rotation vector Ω can then be written in the radar coordinate system, the target coordinate system can be represented with respect to the transition coordinate system, and the transition coordinate system can be represented with respect to the radar coordinate system. From these relationships, the transformation from the target coordinate system into the radar coordinate system can be formulated, where cos²β < sin²α, α ∈ (0, π/2), and the components ξ_i, i = 1, 2, 3, are of the form exemplified by

ξ3 = (cos α cos θ / sin α)(x cos(wt_m) − y sin(wt_m)) + (cos β / sin α)(x sin(wt_m) + y cos(wt_m)) − (z + vt_m) cos θ. (15)

According to Figure 1, the location of radar A can be expressed as D_{A,S0} = [0, 0, 0].
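Under this rigid-body model, a scatterer's instantaneous position can be sketched numerically. The snippet below is an illustrative reconstruction of the motion model (rotation about the Z-axis at rate Ω plus translation at speed v along it), not the paper's code; the function name is ours.

```python
import numpy as np

def scatterer_position(p, omega, v, t_m):
    """Position of a scatterer p = (x, y, z) in the target frame S at slow
    time t_m, for spin rate omega about the Z-axis and translation speed v
    along it (an illustrative sketch of the rigid-body motion model)."""
    x, y, z = p
    c, s = np.cos(omega * t_m), np.sin(omega * t_m)
    return np.array([x * c - y * s,   # rotation in the (X, Y) plane
                     x * s + y * c,
                     z + v * t_m])    # translation along the spin axis
```

A quarter turn (omega·t_m = π/2) maps a point on the X-axis onto the Y-axis, which is an easy sanity check of the rotation convention.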
The distance from radar A to scattering point P can be expressed as in (16). Carrying out the first-order Taylor expansion of (16), (16) can be rewritten as (17). Substituting (x, y, z) from (14) into (17), the expression can be rewritten further. The location of radar B follows from the geometry, and the distance from scattering point P to radar B is R_{B,P}(t_m). Suppose that a linear frequency modulated pulse at a pulse repetition frequency PRF is transmitted by radar A. After preprocessing, including demodulation to baseband and range compression, the processed echo data of scattering point P can be expressed with the following notation: i = A, B is the radar index, σ_p is the complex scattering coefficient of scattering point P, t is the fast time in the range dimension, f_c is the center frequency of the transmitted signal, c is the propagation velocity of the electromagnetic wave, and W is the bandwidth of the transmitted signal.
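The range-compressed point response described here can be sketched as follows. The sinc form is the standard matched-filter output for an LFM pulse, and the parameter values are assumptions (the paper only states Ku-band operation and a 2 GHz bandwidth).

```python
import numpy as np

# Illustrative sketch of the range-compressed echo of a single point
# scatterer: a sinc of width c/(2W) in fast time, carrying the phase
# -4*pi*f_c*R/c.  All numeric values below are assumptions.
c = 3e8          # propagation velocity (m/s)
f_c = 15e9       # Ku-band center frequency (assumed)
W = 2e9          # bandwidth, from the simulation section
sigma_p = 1.0    # complex scattering coefficient

def compressed_echo(t, R):
    """Range-compressed response at fast time t for a scatterer at range R."""
    tau = 2 * R / c                       # two-way delay
    return (sigma_p * W * np.sinc(W * (t - tau))
            * np.exp(-1j * 4 * np.pi * f_c * R / c))
```

The response peaks at t = 2R/c with magnitude sigma_p·W and falls off as a sinc away from the peak, which is what the scattering point extraction step exploits.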

Theory and Method
The proposed method is shown in Figure 2. After receiving the target echoes from two arbitrary radars, motion compensation was accomplished first. Then, through scattering point extraction, the ISAR image was converted into a scatterer set.
The process in the yellow rectangle was used to match the scattering point sets and to perform 3D imaging. First, the height of each scatterer was set to one. Meanwhile, coarse scatterer matching with RANSAC was carried out. Based on the preliminary transformation, ARX and ARY could be estimated. Then, the scatterer heights were obtained based on the preliminary ARX and ARY. After scatterer height estimation, an affine transformation with the estimated scatterer heights was performed. The estimation of ARX, ARY, and scatterer height was executed iteratively, and the iteration was stopped when the estimates of ARX and ARY varied only slightly. Finally, accurate ARX and ARY were derived from a small-range search. The operations in the red rectangle will be described in detail in the rest of this section.
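The iterative loop just described can be summarized in a skeleton such as the following, where the four callables stand in for the RANSAC matching, ARX/ARY estimation, height estimation, and height-aware affine matching steps detailed in the rest of this section; the names and the convergence test are ours.

```python
def estimate_3d(spa, spb, ransac_match, estimate_angles, estimate_height,
                affine_match, tol=1e-3, max_iter=20):
    """Skeleton of the iterative matching loop from the flow chart.

    The four callables are placeholders for the algorithm steps:
    coarse RANSAC matching, ARX/ARY estimation from the transformation,
    scatterer height estimation, and affine matching with heights."""
    z = [1.0] * len(spa)                  # initial scatterer heights
    pairs = ransac_match(spa, spb)        # coarse matching (all heights = 1)
    arx, ary = estimate_angles(pairs)
    for _ in range(max_iter):
        z = estimate_height(pairs, arx, ary)
        pairs = affine_match(spa, spb, z)
        new_arx, new_ary = estimate_angles(pairs)
        # stop when ARX and ARY vary only slightly
        if abs(new_arx - arx) < tol and abs(new_ary - ary) < tol:
            arx, ary = new_arx, new_ary
            break
        arx, ary = new_arx, new_ary
    return pairs, z, arx, ary
```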


Scattering Point Sets Matching based on RANSAC and Affine Transformation
After motion compensation, the images from radar A and radar B can be expressed in a common form, where i = A, B, ϕ_i denotes the corresponding radar residual phase, and x_i and y_i denote the scattering location in the corresponding image i. For a short observation time, cos(wt_m) can be approximated as 1 and sin(wt_m) as wt_m. After scattering point extraction, two scattering point sets are obtained: scattering point set A (x_A, y_A) with N_A scatterers and scattering point set B (x_B, y_B) with N_B scatterers. For convenience of expression, (x_A, y_A) and (x_B, y_B) are defined as SPA and SPB, respectively. According to (16) to (22), SPA and SPB can each be expressed in the image coordinates, and the relationship between SPA and SPB can be written through a matrix U ∈ R^{3×3} whose elements u_i, i = 1, 2, ..., 6, are

u_1 = cos ϕ − (cos α cos β / sin²α) sin ϕ, u_2 = −sin ϕ cos θ / sin²α, u_3 = cos β sin ϕ (1 + cot²α), u_4 = −u_2, u_5 = u_1, u_6 = −cos α sin ϕ cos θ / sin²α.

Through the analysis of (27), one can find that the angles α, β, and ϕ cause rotation, scaling, and shift in image B. Because u_5 = u_1, the scaling in the range and azimuth dimensions is the same. Furthermore, z is different for each scattering point, which means that each scattering point has a different shift in the range and azimuth dimensions. Therefore, compared with image A, rotation, scaling, and shift appear in image B.

In order to estimate preliminary ARX and ARY, assume that z of every scatterer is one at the start. Following RANSAC, let the total number of estimations be K. In the kth estimation, three pairs of scatterers are chosen randomly to form an affine transformation. By solving (28), U_k is estimated, and the converted SPA, denoted (x̂_B, ŷ_B), is obtained by applying U_k. Scatterer matching can then be performed based on the minimum Euclidean distance between the converted SPA and SPB.
For the ith scattering point in image A, the matched scattering point is found by the minimum-distance rule, where i and j are the indices of the scattering points in image A and image B, respectively, and j_i is the matched scattering point index. Assuming the number of matched scatterers is N_k, the preliminary scatterer matching result follows.
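A compact sketch of this coarse matching step is given below: random scatterer triples propose an affine hypothesis, converted SPA points are matched to SPB by minimum Euclidean distance, and the hypothesis with the most inliers wins. The trial count and distance threshold are assumed values.

```python
import numpy as np

def ransac_affine_match(spa, spb, K=500, dist_thresh=0.5, seed=0):
    """Coarse matching of two scattering point sets (N_A x 2 and N_B x 2
    arrays) by RANSAC over 2D affine hypotheses."""
    rng = np.random.default_rng(seed)
    best_pairs, best_U = [], None
    for _ in range(K):
        ia = rng.choice(len(spa), 3, replace=False)
        ib = rng.choice(len(spb), 3, replace=False)
        # Solve [x_A  y_A  1] U = [x_B  y_B] for the six affine parameters.
        A = np.hstack([spa[ia], np.ones((3, 1))])
        try:
            U = np.linalg.solve(A, spb[ib])
        except np.linalg.LinAlgError:
            continue                        # degenerate (collinear) triple
        conv = np.hstack([spa, np.ones((len(spa), 1))]) @ U   # converted SPA
        # Nearest neighbour in SPB for every converted point of SPA.
        d = np.linalg.norm(conv[:, None, :] - spb[None, :, :], axis=2)
        j = d.argmin(axis=1)
        inliers = [(i, j[i]) for i in range(len(spa)) if d[i, j[i]] < dist_thresh]
        if len(inliers) > len(best_pairs):
            best_pairs, best_U = inliers, U
    return best_pairs, best_U
```

Because a correct hypothesis maps many converted SPA points very close to their SPB counterparts, the inlier count separates it from hypotheses built on wrong correspondences.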

Estimation of ARX and ARY
After the RANSAC process, the rough relationship U between the scattering point sets can be obtained. To estimate ARX and ARY from U, we let k_1 and k_2 be the quantities defined in (32) and (33). Combining (32) and (33), we have

sin²α = [(k_2² + k_1² + 1) ± √((k_2² + k_1² + 1)² − 4k_2²)] / (2k_2²), (34)

cos β = k_1 sin²α / cos α. (35)

Because sin α cannot exceed 1, the minus sign in (34) is taken. Therefore, ARX and ARY can be derived for each of the K RANSAC estimations, and the final estimates of ARX and ARY are obtained from them.
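To make the root selection in (34) concrete, the snippet below recovers α and β from k_1 and k_2. Since the defining equations (32) and (33) did not survive extraction, the definitions k_1 = cos α cos β / sin²α and k_2 = cos θ / sin²α are inferred from (34) and (35) and should be read as assumptions.

```python
import numpy as np

def angles_from_k(k1, k2):
    """Recover ARX (alpha) and ARY (beta) from k1 and k2 via (34)-(35),
    assuming k1 = cos(a)cos(b)/sin^2(a) and k2 = cos(theta)/sin^2(a).
    The smaller quadratic root is taken so that sin(alpha) <= 1."""
    a = k2**2 + k1**2 + 1
    sin2_alpha = (a - np.sqrt(a**2 - 4 * k2**2)) / (2 * k2**2)   # minus sign
    alpha = np.arcsin(np.sqrt(sin2_alpha))
    beta = np.arccos(k1 * sin2_alpha / np.cos(alpha))            # from (35)
    return alpha, beta
```

One can check that s = 1 makes the quadratic k_2²s² − (k_2² + k_1² + 1)s + 1 non-positive, so sin²α always lies at the smaller root, which is why the minus sign is the physically valid choice.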

Estimation of Scatterer Height and Affine Transformation Considering Scatterer Height
The shift difference of each scatterer can be represented through the relations above. According to (25) and (26), this differing shift can be rewritten, and by substituting α and β into (25) and (26), the estimated scatterer height can be expressed. Combining the estimated z, the relationship between SPA and SPB can be rewritten as (46), which in compact form reads

b = Aμ,

where b = [x_{B,1}, ..., x_{B,N_d}, y_{B,1}, ..., y_{B,N_d}]^T, μ = [u_1, u_2, u_3, u_4, u_5, u_6]^T, and A is the middle part of (31). After QR decomposition, A can be expressed as A = QR. Because Q is an orthogonal matrix, QQ^T = I, where I is the identity matrix, (·)^T denotes the transpose, and R is the upper triangular matrix. μ can then be derived by solving Rμ = Q^T b. Based on the above analysis, the recalibration of the range and azimuth dimensions can be expressed accordingly. Combined with the estimated z, the target 3D scattering image (x, y, z) is obtained.
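The least-squares solution of b = Aμ via QR decomposition can be sketched with NumPy; this mirrors the derivation above rather than reproducing the paper's implementation.

```python
import numpy as np

def solve_affine_qr(A, b):
    """Solve the overdetermined system b = A @ mu for the six affine
    parameters via QR decomposition: A = QR, then R mu = Q^T b."""
    Q, R = np.linalg.qr(A)          # reduced QR: Q has orthonormal columns
    return np.linalg.solve(R, Q.T @ b)
```

This is numerically equivalent to `np.linalg.lstsq(A, b)` for a full-rank A, but it makes the Q^T b / back-substitution steps of the text explicit.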

Simulations
In this section, to demonstrate the effectiveness and superiority of the proposed algorithm, some experiments based on two different simulation datasets are conducted. To verify the capability of the proposed method, the first experiment compared the modified SIFT algorithm from [36] and the proposed method based on point scattering center theory. Because real data are hard to obtain, a second experiment was conducted based on CADFEKO software, by which the simulated echo was similar to the real one. For these two experiments, the radar parameters were the same. The radar operated in the Ku-band. The signal bandwidth was 2 GHz.

Experiment on Point Scattering Simulation Data
To compare the modified SIFT with the proposed method, a point-scatterer simulated satellite was used. The system parameters are listed in Table 1. Figure 3 shows the key points extracted by the modified SIFT and the dominant scatterers extracted from the ISAR images. Comparing Figure 3a,c, one can see that, due to the different principles of feature extraction, the feature points from SIFT differ from the dominant scatterers of the ISAR images. Figure 4 shows the matched feature points. The number of feature points matched by SIFT was smaller than that of the proposed method. Owing to the different imaging mechanisms, ISAR images do not have the strict gray-scale similarity of optical images. Additionally, the modified SIFT feature description vectors are too strict for ISAR images, which greatly reduces the number of matching points.

Figure 5 shows the fusion images. Different from the modified SIFT, the fusion image from the proposed method is a scatterer image. The lines in Figure 5b show the matched scatterer pairs; one can see that almost all the dominant scatterers are matched. Because U represents the relationship between the two images, the root mean square error (RMSE) of the estimated U is used to measure the performance of the algorithms. The RMSE is computed from the estimated matrices U_i and the real value U_0. Using different signal-to-noise ratios (SNRs) of (−10, −5, 0, 5) dB, 200 Monte Carlo experiments were carried out. The RMSE of the estimated U is shown in Figure 6a. The performance of the modified SIFT improved as the SNR increased, while the proposed method was unaffected by noise in this range. This is because the dominant scatterer extraction algorithm is highly precise, and precise dominant scatterer extraction is the foundation of the proposed method. After feature point matching, the target 3D image can be obtained, as shown in Figure 6b.
A comparison of estimated target attitude and the setting target attitude to radar A is shown in Figure 7.
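One plausible form of the RMSE metric over the Monte Carlo runs is sketched below; the defining equation did not survive extraction, so the use of the Frobenius norm here is our assumption.

```python
import numpy as np

def rmse_U(estimates, U0):
    """RMSE of estimated transformation matrices over Monte Carlo runs,
    taking the Frobenius norm of each estimation error (assumed metric)."""
    err = [np.linalg.norm(Ui - U0, 'fro')**2 for Ui in estimates]
    return np.sqrt(np.mean(err))
```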


Simulation based on CADFEKO Software
To verify that the proposed method is suitable for radar echoes close to real data, an experiment based on CADFEKO software was carried out. The target was a simulated satellite, as shown in Figure 8a. The length and width of each solar panel were 1 m and 0.28 m, respectively. In the middle there was a sphere with a diameter of 0.4 m, and the distance between the two solar panels was 0.6 m. The setting parameters are listed in Table 2. According to the setting parameters, the ARX was 80°. However, ARY cannot be directly obtained from the system parameters and needs to be calculated. The unit LOS vectors n_A and n_B of radar A and radar B can be expressed in terms of ψ, the azimuth angle between radar A and radar B, and φ, the azimuth angle of radar A. Because the O0X0Y0 plane is spanned by n_A and n_B, the Z0-axis can be expressed as their normalized cross product, and the Y0-axis then follows from the right-handed convention. ARY can be represented through n_Y0(3), the third element of n_Y0. β is influenced by ψ and α, and the relationship is shown in Figure 8b. Because the pitch angles of the two radars are the same, β is greater than 90° and less than 115°. In addition, ϕ cannot be obtained from the preset parameters. The CADFEKO model is abstractly represented in Figure 9a; for convenience of analysis, the red part of Figure 9a is shown in Figure 9b. ϕ can be obtained from this geometry, where cos γ = sin θ sin(ψ/2).
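The axis construction described here can be sketched with cross products. Treating the target spin axis as the global z-axis is our simplifying assumption for illustrating the n_Y0(3) expression; the function names are ours.

```python
import numpy as np

def radar_axes(n_A, n_B):
    """Build the S0 axes from the two radar LOS unit vectors: X0 is the LOS
    of radar A, Z0 is normal to the plane spanned by n_A and n_B, and Y0
    completes the right-handed triad."""
    z0 = np.cross(n_A, n_B)
    z0 /= np.linalg.norm(z0)
    y0 = np.cross(z0, n_A)
    y0 /= np.linalg.norm(y0)
    return n_A, y0, z0

def ary(n_A, n_B, spin_axis=np.array([0.0, 0.0, 1.0])):
    """ARY: angle between the target spin axis and the Y0-axis.  With the
    spin axis taken as global z (an assumption), cos(ARY) reduces to the
    third component of Y0, matching the n_Y0(3) expression in the text."""
    _, y0, _ = radar_axes(n_A, n_B)
    return np.arccos(np.clip(np.dot(y0, spin_axis), -1.0, 1.0))
```

For two radars placed in the global xy-plane, Z0 coincides with z and ARY is exactly 90°, consistent with β growing past 90° once the radars acquire a common pitch angle.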
Through scattering center extraction, the extracted SPA and SPB are shown in Figure 10. Figure 10b shows eight edge points with strong sidelobes. Owing to the differences in the angle of incidence and the degree of occlusion at the edge points, their scattering intensities also differ: two of them are significantly larger than the other six. There are also scattering points between the eight edges. After RANSAC, the converted SPA and SPB are plotted in Figure 11b. Since the influence of noise on RANSAC is relatively small, the influence of the sidelobes in SPB can be removed after preliminary scattering point matching. After iteration, the target 3D image is obtained, as shown in Figure 12a. The estimated length of the solar panel was 2.56 m. The estimated ARX and ARY are listed in Table 3. According to the estimated ARX and ARY, the 3D reconstruction with respect to the radar coordinate system is shown in Figure 12b, where two solar panels are identified. The three views of Figure 12b are shown in Figure 12c-e. Compared with SPA in Figure 11a, the front view in Figure 12c is bigger and upside down. According to (50) and (51), the top view shows x and y in the radar coordinate system, whereas Figure 11a shows x_A and y_A, the projection of the target onto the image plane, which is different from the (X0, Y0) plane.

Conclusions
In this paper, ISAR image matching and three-dimensional (3D) scattering imaging based on extracted dominant scatterers have been studied. Compared with 3D imaging based on an ISAR sequence, the proposed method needs only two arbitrary radar observations over a short time. Different from SIFT, the matching process in the proposed method is based on the locations of the scattering points; without harsh feature descriptions, the number of matched scatterers is greatly increased. First, taking advantage of the sparsity of ISAR images, the radar images are transformed into scattering point sets by the scattering center extraction method. Then, coarse scatterer matching is achieved by RANSAC. The scatterer heights and accurate affine transformation parameters are estimated iteratively to obtain fine scatterer matching. Based on the matched scatterers, the target 3D image, ARX, and ARY can be obtained. Finally, experiments based on the electromagnetic simulation software CADFEKO have been conducted to demonstrate the effectiveness of the proposed algorithm.