A Multi-Perspective 3D Reconstruction Method with Single Perspective Instantaneous Target Attitude Estimation

Due to the limited information in two-dimensional (2D) radar images, three-dimensional (3D) radar image reconstruction has received significant attention. However, the target attitude obtained by existing 3D reconstruction methods is unknown. In addition, from a single perspective one can only obtain the 3D reconstruction of a simple target; for a complex target, occlusion and scattering characteristics limit the 3D reconstruction information available from a single perspective. To tackle these two problems, this paper proposes a new method for multi-perspective 3D reconstruction and single-perspective instantaneous target attitude estimation. The method consists of three steps. First, a 3D reconstruction with unknown attitude is obtained by the traditional matrix factorization method. Then, in order to obtain the attitude of the target 3D reconstruction, additional constraints are added to the projection vectors computed by the matrix factorization method. Finally, the information from the different perspectives is merged into a single set of information according to certain rules. After this information fusion, a multi-perspective 3D reconstruction with better visibility and more information is obtained. Simulation results demonstrate the effectiveness and robustness of the proposed method.


Introduction
Inverse synthetic aperture radar (ISAR) imaging has been widely used in military and civil areas due to its day-and-night and weather-independent capability [1][2][3][4][5][6]. A large bandwidth and a wide target rotational angle are usually needed to acquire a high-resolution two-dimensional (2D) image. It is known that a 2D ISAR image is the projection of a three-dimensional (3D) target onto an imaging plane. However, the information about the 3D target obtained from the 2D image is limited. Since target information such as structure, size and attitude can be obtained from the 3D image, target 3D imaging has received significant attention in recent years. In addition, the target 3D image plays an important role in radar automatic target classification and identification. The existing 3D imaging methods can be roughly categorized into three groups based on the number of antennas: interferometric ISAR (InISAR), direct 3D ISAR imaging and 3D reconstruction, as follows. The InISAR employs at least two antennas placed in special positions [10][11][12][13][14][15][16][17][18][19][20][21]. Although the InISAR system can easily obtain the 3D image without prior knowledge of the target motion, it entails relatively high cost and hardware complexity for a single radar.
Direct 3D ISAR imaging is a direct extension of conventional ISAR [22]. The 3D distributions of the scatterers of a target are extracted directly from the radar echoes. However, the accuracy of the third-dimension information obtained by this type of method is usually low.
3D reconstruction uses a sequence of ISAR images to obtain a target's 3D structure by applying a matrix factorization method to the scatterer trajectory matrix [23][24][25][26][27][28]. 3D reconstruction is usually based on conventional monostatic ISAR, which typically has lower cost than InISAR and higher accuracy than direct 3D ISAR imaging. The advantage of this kind of method is that no prior information about the geometric structure of the target is needed. However, the implementation process is relatively complex. For this kind of method, accurate scatterer extraction and association are necessary to generate an accurate trajectory matrix, which is the foundation of high-resolution 3D reconstruction. Fortunately, scatterer extraction methods, such as rotational invariance techniques [29,30] and the modified orthogonal relax method [31], have been well studied. Scatterer tracking methods, such as Kalman tracking and Markov chain Monte Carlo (MCMC) data association [32,33], also perform well.
For 3D reconstruction, the target 3D geometry is reconstructed by forming and factorizing the trajectory matrix, which consists of the range and cross-range coordinates of the target scatterers. Range scaling can easily be accomplished using the predetermined radar system parameters, while it is time-consuming to perform cross-range scaling on all the sub-images. To reduce the computational complexity, [24,25] use only the range trajectory to obtain the shape of a target instead of its size, but this leads to scale ambiguity because of the lack of cross-range scaling. To solve this problem, an innovative method is proposed in [34], where multi-view radar image sequences are combined with optical images to obtain the target 3D structure. However, this method needs to perform orientation calibration, which is time-consuming. To reduce the computational burden of cross-range scaling, [26] performs cross-range scaling and matrix factorization repeatedly to achieve accurate 3D geometry reconstruction. Nevertheless, that method is only suitable for targets with linear changes in speed, and the attitude of the reconstructed 3D geometry is unknown.
Considering the trade-off between time cost and accuracy, 3D reconstruction is a promising way to obtain a target 3D image. However, the aforementioned methods, called the traditional methods here, are only applicable to a single perspective, i.e., a fixed line of sight (LOS). For traditional 3D reconstruction, there is a nonsingular square matrix between the projection matrices and the 3D structure of the target obtained by matrix factorization. This matrix does not affect the shape or the size of the target, but it changes the target attitude obtained by the traditional 3D reconstruction. Thus, for the traditional methods, the target attitude is unknown and the reconstructed attitude may contain arbitrary 3D rotations. Consequently, it is difficult to obtain detailed information about the shape of the target and the change of the target attitude by combining perspectives from different radars in the 3D reconstruction.
To deal with these problems, this paper proposes a method to perform multi-perspective 3D reconstruction and to estimate the target attitude from the instantaneous 3D reconstruction result. Based on the traditional 3D reconstruction method, additional constraints are added to the projection vectors to obtain the target attitude. After combining the 3D reconstruction geometries from different perspectives, a multi-perspective 3D reconstruction result is obtained. Since the scatterers of a target observed from different perspectives usually differ, it is necessary to adjust the attitudes to a unified perspective through attitude estimation. Then, the position information of the scattering points from different perspectives is fused to obtain a single set of target scattering points. By using more perspectives, multi-perspective 3D reconstruction obtains more information about the target and has better visibility. The proposed method can be regarded as an extension of 3D reconstruction.
Compared with InISAR, the parameters of each radar and the radar layout are not required by the proposed method. All active radars can independently perform 3D reconstruction. By combining the 3D reconstruction results from these radars, a more complete target can be obtained with the proposed method.
The remainder of this paper is organized as follows. Section 2 introduces the 3D projection model. Section 3.1 briefly introduces the matrix factorization method. Section 3.2 analyzes why the target attitude of the traditional 3D reconstruction is uncertain and introduces a method to solve the problem. Section 3.3 describes the information combination process that forms the multi-perspective 3D reconstruction of a target. Section 3.4 summarizes the algorithm. Section 4 presents simulation results that validate the effectiveness of the proposed method. Section 5 concludes this paper.

Signal Model
In this section, before the discussion starts, the following assumptions should be clarified. Long-time and wide-angle echoes of a steadily moving target can be obtained. The translational motion can be compensated using existing methods [35]. In sub-aperture processing, the motion of the LOS can be modeled as constant. The dominant scatterers can be extracted and their trajectories can be formed to perform 3D reconstruction.
For wide-angle target echoes, it is necessary to divide the raw data into overlapped sub-apertures to perform translational motion compensation and dominant scatterer center extraction. Assume that the total number of sub-apertures is K. The motion, whether it comes from the target or the radar, can be modeled after translational motion compensation as the radar rotating around a stationary target, as shown in Figure 1a. In the kth sub-image, the unit projection vectors α_k and β_k in the imaging plane are shown in Figure 1b, where α_k and β_k represent the range and azimuth dimensions in the kth imaging plane, respectively.
The established coordinate system is centered at the target. Assume that the target contains Q scattering centers, and that the position of the qth scatterer is denoted as p_q = (x_q, y_q, z_q)^T, (q = 1, 2, ..., Q), where (·)^T represents the transpose. For a maneuvering motion, the relative rotation velocity of the LOS varies with the sub-aperture and is denoted as w_k, (k = 1, 2, ..., K). The rotation direction is the same as the Z axis and is denoted as Ω. During the observation time, the initial azimuthal angle of the LOS is ϕ, and the elevation angle of the LOS remains constant, denoted as θ, which can be estimated by searching, as described later. In the kth sub-image, the unit projection vector of the LOS in the range dimension is formulated as

α_k = [sin θ cos(ϕ + w_k t_k), sin θ sin(ϕ + w_k t_k), cos θ]^T,  (1)

where t_k denotes the middle time of the kth sub-image. The unit projection vector in the azimuth dimension is represented as

β_k = (Ω × α_k) / ‖Ω × α_k‖,  (2)

where × denotes the cross product. In general, the 2D radar image is the projection of the 3D target onto the imaging plane. Suppose that the qth scattering center in the kth sub-image imaging plane is represented as p_kq = (u_kq, v_kq)^T, where u_kq and v_kq are the projection locations in the range and azimuth directions, respectively. Then p_kq can be computed as

p_kq = [α_k, β_k]^T p_q.  (3)

The projection results at all times can be represented in matrix form as

W = E P + Υ,  (4)

where W ∈ R^{2K×Q} denotes the trajectory matrix of the scattering center locations in the 2D image planes, whose elements are W_{2k−1,q} = u_kq and W_{2k,q} = v_kq; P ∈ R^{3×Q} with P = [p_1, p_2, ..., p_q, ..., p_Q]; Υ is modeled as Gaussian noise; and E ∈ R^{2K×3} denotes the projection matrix,

E = [E_1^T, E_2^T, ..., E_K^T]^T,  E_k = [α_k, β_k]^T,  (5)

where E_k denotes the kth 2 × 3 block of E. Equation (4) reveals the projection process from 3D space to 2D space.
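As an illustration of the projection geometry described above, the following Python sketch builds the unit projection vectors for a LOS with elevation angle θ (measured from the Z axis) and a given azimuth angle, and projects 3D scatterers onto the (range, azimuth) image plane. The vector conventions (rotation axis Ω = (0, 0, 1), azimuth direction Ω × α normalized) are assumptions consistent with the text, not the paper's exact formulas.

```python
import math

def los_vectors(theta, phi):
    """Unit projection vectors for a LOS with elevation angle theta
    (from the Z axis) and azimuth angle phi. The azimuth direction is
    Omega x alpha, normalized, with Omega = (0, 0, 1) the rotation axis."""
    alpha = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
    # Omega x alpha for Omega = (0, 0, 1) is (-alpha_y, alpha_x, 0)
    bx, by = -alpha[1], alpha[0]
    n = math.hypot(bx, by)
    beta = (bx / n, by / n, 0.0)
    return alpha, beta

def project(scatterers, theta, phi):
    """Project 3D scatterers (x, y, z) onto the (range, azimuth) plane."""
    alpha, beta = los_vectors(theta, phi)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [(dot(alpha, p), dot(beta, p)) for p in scatterers]
```

For example, a scatterer at (1, 0, 0) viewed with θ = π/4 and ϕ = 0 projects to range coordinate sin(π/4) and azimuth coordinate 0.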

Traditional 3D Geometry Reconstruction Based on Factorization Method
In order to illustrate the problem, we give a brief description of the process of 3D reconstruction based on matrix factorization [25,26].
First, the singular value decomposition (SVD) of W is computed as

W = U Σ V^T.  (6)

From (4), it can be seen that the rank of the trajectory matrix W is 3 in the noiseless scenario. Although the rank of W may not be strictly equal to 3 when there is noise, Σ is a diagonal matrix with three dominant singular values. Let Σ̄ ∈ R^{3×3} collect the elements in the first three rows and columns of Σ, let Ū ∈ R^{2K×3} denote the first three columns of U, and let V̄ ∈ R^{Q×3} denote the first three columns of V. Then the projection matrix E and the scatterer 3D positions P can be estimated as

Ê = Ū Σ̄^{1/2} A,  (7)

P̂ = A^{−1} Σ̄^{1/2} V̄^T,  (8)

where Ê and P̂ represent the estimates of E and P, respectively; the rows of Ê form the vector pairs ξ_k ∈ R^{3×1} and γ_k ∈ R^{3×1} (i.e., Ê_k = [ξ_k, γ_k]^T); and A ∈ R^{3×3} is a nonsingular matrix used to adjust Ê to conform to the properties of a projection matrix. Since γ_k and ξ_k appear in pairs, they are called projection vector pairs in this paper. They satisfy the following constraints:

ξ_k^T ξ_k = 1,  γ_k^T γ_k = 1,  ξ_k^T γ_k = 0,  (k = 1, 2, ..., K).  (9)

Finally, the matrix A and the projection vector pairs can be obtained by solving the equations in (9). Therefore, the target 3D geometry reconstruction P̂ can be obtained from (8). A detailed description of 3D geometry reconstruction based on the factorization method can be found in [25,26].
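The rank-3 structure of the trajectory matrix and the truncated-SVD factorization can be sketched numerically as follows. This is a toy example with assumed LOS angles; solving for the adjustment matrix A from the orthonormality constraints is omitted, so the factors here are only determined up to a 3 × 3 nonsingular matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth scatterers (3 x Q) and K sub-image projection matrices (2K x 3).
Q, K = 7, 20
P = rng.standard_normal((3, Q))
E = np.zeros((2 * K, 3))
for k in range(K):
    theta, phi = np.pi / 4, 0.05 * k          # assumed per-sub-image LOS angles
    a = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])              # range direction
    b = np.cross([0.0, 0.0, 1.0], a)
    b /= np.linalg.norm(b)                     # azimuth direction
    E[2 * k] = a
    E[2 * k + 1] = b

W = E @ P                                      # noiseless trajectory matrix

# Only three dominant singular values: W has rank 3.
s = np.linalg.svd(W, compute_uv=False)

# Truncated factors reproduce W; they equal E and P only up to a 3x3 matrix A.
U, S, Vt = np.linalg.svd(W)
E_hat = U[:, :3] * np.sqrt(S[:3])
P_hat = np.sqrt(S[:3])[:, None] * Vt[:3]
```

Checking `E_hat @ P_hat` against `W` confirms that the truncation is exact in the noiseless case, while `E_hat` itself generally differs from `E` until A is resolved.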
In a given coordinate system, the scatterer positions P are defined as the attitude of the target in this paper, which can be used to estimate the motion of the target. Although the target attitude P is estimated in (8), there are too many solutions for P due to the insufficiency of the constraints in (9). Therefore, the estimate of the target attitude P is still undetermined. More details are given in the following.

Analysis and Estimation of 3D Reconstruction Attitude
The constraints in (9) can only ensure that the projection vector pairs γ_k and ξ_k are 3 × 1 orthogonal unit vectors. However, the number of such orthogonal unit vector pairs is infinite. In particular, there is an arbitrary 3D rotation matrix, called an ambiguity rotation matrix, between these pairs; i.e., for any 3 × 3 unitary matrix C_k with C_k^T C_k = I, γ_k^T C_k and ξ_k^T C_k also satisfy (9), while (8) still holds for the correspondingly rotated attitude P.
Ignoring the noise, (4) can be rewritten as

W_k = E_k P = (E_k C_k)(C_k^T P),  (10)

where W_k is the kth 2 × Q block of W. Therefore, the existence of the ambiguity rotation matrix C_k can arbitrarily change the target attitude retrieved by the traditional 3D reconstruction. Due to the ambiguity rotation matrix C_k, the projection matrix and the corresponding target attitude can be represented as

Ẽ = E C,  P̃ = C^T P,  (11)

where Ẽ and P̃ are, respectively, the projection matrix and the target attitude after being changed by an arbitrary 3D rotation matrix C. Under the conditions in (9), (10) holds; the arbitrary 3D rotation matrix C_k at each sub-image is different, and hence the attitude obtained at each sub-image is also different. The projection matrix and target attitude at the kth sub-image can be formulated as

Ẽ_k = E_k C_k,  (12.1)

P̃_k = C_k^T P.  (12.2)

In order to analyze and solve the target attitude estimation problem, the following content is divided into three parts: the characteristics of the arbitrary 3D rotation matrix C_k, the influence of C_k on the projection vector pairs, and its influence on the estimated target attitude.
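The ambiguity can be verified with a few lines of pure Python: inserting any rotation C between the projection block and the scatterer positions leaves the observed projections unchanged, so the factorization cannot distinguish P from C^T P. The particular E_k, P and C values below are arbitrary illustrations.

```python
import math

def matmul(A, B):
    """Plain nested-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# A rotation about the Z axis plays the role of the ambiguity matrix C_k.
t = 0.7
C = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

E_k = [[math.sin(0.5), 0.0, math.cos(0.5)],   # illustrative range row
       [0.0, 1.0, 0.0]]                        # illustrative azimuth row
P = [[1.0, 0.0],                               # two scatterers, columns of P
     [0.0, 2.0],
     [3.0, 1.0]]

W_k  = matmul(E_k, P)
W_k2 = matmul(matmul(E_k, C), matmul(transpose(C), P))  # (E_k C)(C^T P)
```

Since C C^T = I, `W_k2` equals `W_k` element by element, even though C^T P is a visibly rotated attitude.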
Under the influence of the ambiguity rotation matrix C_k, the attitude of the target is unknown. Considering the motion of the LOS, by transforming the projection vectors ξ_k and γ_k into the projection directions α_k and β_k, respectively, the target attitude can be brought into the same coordinate system.
The analysis above is based on the assumption that the LOS is in rotation motion and the target is stationary.However, it is more meaningful to observe the motion and attitude changes of the target rather than the radar.Therefore, the following analysis is based on the assumption that the target rotates around Z axis and the LOS is fixed.
Assume that the LOS is fixed in the XOZ plane with an angle θ to the Z axis; θ is unknown but will be estimated later. Then the corresponding unit projection vectors in the range and azimuth directions can be expressed by α_0 and β_0, respectively, as follows:

α_0 = [sin θ, 0, cos θ]^T,  β_0 = [0, 1, 0]^T.  (13)

The matrix C_k satisfies the following formula:

[ξ_k, γ_k]^T C_k = [α_0, β_0]^T,  (14)

where c_1, c_2 and c_3 denote the first to the third column vectors of C_k. To make γ_k and ξ_k consistent with β_0 and α_0, respectively, several additional constraints are added to the projection vector pairs in two steps.
In the first step, the constraint in the third dimension can be expressed by

ξ_k^T c_3 = cos ε = cos θ,  γ_k^T c_3 = 0,  (15)

where ε is the intersection angle between the vectors ξ_k and c_3. Equation (15) means that γ_k and c_3 are orthogonal to each other, and that the intersection angle ε equals θ, because ξ_k and c_3 are unit vectors.
In the second dimension, the constraint can be written as

ξ_k^T c_2 = 0,  γ_k^T c_2 = 1.  (16)

Equation (16) means that ξ_k and c_2 are orthogonal to each other and, since both γ_k and c_2 are unit vectors, that γ_k is identical to c_2.
On the other hand, the ambiguity rotation matrix C_k must conform to the properties of a rotation matrix. Considering (15) and (16), the additional constraints on the projection vector pairs are

c_3^T c_3 = 1,  (17.1)
γ_k^T c_3 = 0,  (17.2)
ξ_k^T c_3 = cos θ,  (17.3)
c_2 = γ_k,  (17.4)
c_1 = c_2 × c_3.  (17.5)

The constraints in (17) can be divided into two parts, (17.1)-(17.4) and (17.5). By solving (17.1)-(17.4), two possible results for c_3 can be obtained. Since cos θ cannot distinguish θ from −θ, the vector c_3 can lie on either side of ξ_k at an angle θ, as shown in Figure 2a,b. The constraint in (17.5) ensures that c_1 is a unit vector perpendicular to the plane c_2Oc_3. In addition, the direction of c_1 can be upward or downward; therefore, the constraint in (17.5) has another possible expression:

c_1 = −(c_2 × c_3).  (18)

Combining this with the two possible solutions for c_3, there are four possible solutions for C_k, denoted by C_{k,r} (r = 1, 2, 3, 4). The relationship between the constraints and the corresponding vectors is illustrated in Figure 2: Equation (17) corresponds to Figure 2a,b, and Equation (18) corresponds to Figure 2c,d.
Based on the above analysis, the four cases C_{k,r} (r = 1, 2, 3, 4) can be written out column by column as in (19.1)-(19.4), where c_{k,j} (j = 1, 2, 3) denotes the jth column vector of C_{k,r}, and the two choices of c_3 are related by a rotation around the vector c_{k,2}. The relationship of these four cases of C_k is as follows. From (19), one can see that c_2 is the same in all four cases, that is, c_2 = γ_k. The vectors c_1 in (19.1) and (19.3) have opposite directions; this is also true for (19.2) and (19.4). Comparing (19.1) and (19.4), there is a rotation relationship for c_3 with rotation angle 2θ around c_2. The vector c_1 is obtained from the cross product of c_2 and c_3; because c_3 differs, c_1 in (19.1) and (19.4) also differs.
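The four candidate matrices can be constructed explicitly. The sketch below uses the parameterization c_3 = cos θ · ξ_k ± sin θ · (ξ_k × γ_k) together with c_1 = ±(c_2 × c_3); this closed form is an assumption consistent with the constraints in the text, not necessarily the paper's own derivation.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def candidate_cs(xi, gamma, theta):
    """Four candidate ambiguity matrices C_{k,r}, r = 1..4, given the
    orthonormal projection vector pair (xi, gamma) and elevation theta.
    c2 = gamma; c3 lies on either side of xi at angle theta in the plane
    orthogonal to gamma; c1 = +/- (c2 x c3)."""
    w = cross(xi, gamma)  # unit vector completing (xi, gamma) to a basis
    cands = []
    for s3 in (+1, -1):
        c3 = tuple(math.cos(theta) * x + s3 * math.sin(theta) * y
                   for x, y in zip(xi, w))
        for s1 in (+1, -1):
            c1 = tuple(s1 * v for v in cross(gamma, c3))
            cands.append((c1, gamma, c3))  # columns (c1, c2, c3)
    return cands
```

Each candidate has orthonormal columns, c_3 orthogonal to γ_k, and c_3 at angle θ to ξ_k, matching the constraints above.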
Due to the four cases of C_k, the estimated projection matrix and the estimated target attitude each have four corresponding results, and these results are related to each other. Suppose that the estimated projection vectors in the range and azimuth directions are denoted by ξ̃_k and γ̃_k, respectively. According to (14), we have

ξ̃_k^T = ξ_k^T C_{k,r},  (20.1)

γ̃_k^T = γ_k^T C_{k,r}.  (20.2)

The placement of the projection vectors in the range and azimuth directions, α_0 and β_0, in the OXYZ coordinate system is shown in Figure 3a.
Combining (17.2), (19) and (20.1), one can see that the estimated projection vector in the range direction has two possible values, denoted ξ̃_k and ξ̃′_k. Their angles with the Z axis, on the two sides of the Z axis, are both θ, so one of them must coincide with α_0. Suppose that ξ̃_k coincides with α_0; then there are four cases in the orthogonal coordinate systems composed of (ξ̃_k, γ̃_k) and (ξ̃′_k, γ̃′_k), as shown in Figure 3b. In Figure 3b, only one case of C_k makes the estimated projection vectors the same as those in (13); it is denoted as Ĉ_{k,τ}. The result of applying Ĉ_{k,τ} to the attitude P is an attitude that only rotates around the Z axis.
On the other side, the estimated attitude from (12.2) can be rewritten as

P̃_k = C_k^T P = [x^T, y^T, z^T]^T,  (21)

where x, y and z denote the first to the third row vectors of P̃_k. In this paper, z is also called the third dimension of the target. Assume that the transformation result P_0 of C_{k,1} on P is taken as the reference. Then the effects of C_{k,r} (r = 2, 3, 4) on P are as follows. C_{k,3} reverses the first dimensional vector of P_0, while the third dimension stays consistent with that of P_0. C_{k,4} rotates P_0 by 2θ around γ_k. The effect of C_{k,2} on P is as follows: first, P_0 is rotated by 2θ around γ_k, and then the first dimensional vector of the rotated result is reversed. The third dimension of the resulting target attitude is the same as that of the attitude obtained from C_{k,4}.
Four possible results of the action of C_k can thus be obtained at each time t_k, and there exists a definite relationship among them. In order to find Ĉ_{k,τ} among these four cases, the following process is proposed.
After solving the equations in (17), the target attitude can be represented as

P̃_{k,r} = C_{k,r}^T P̂,  (22)

where P̃_{k,r} denotes the attitude after rotation through the ambiguity rotation matrix C_{k,r} at t_k. As analyzed before, the target attitude rotates around the Z axis and C_{k,r} varies with time t_k. Suppose that the ambiguity rotation matrix at the kth sub-image is to be estimated. Between the kth and the mth sub-images, the third dimension of the target coordinates does not change. In addition, the elevation angle θ of the LOS is usually unknown; if an incorrect angle θ is given, the third dimension of the estimated target attitude may vary with time as well. Therefore, the elevation angle θ is estimated by searching over a range of angles, so that a more accurate Ĉ_{k,τ} can be obtained which minimizes the third-dimension difference. The estimate of Ĉ_{k,τ} is computed by

Ĉ_{k,τ} = arg min_{θ, r=1,2,3,4} ‖P̃_{k,r}(3) − P̃_{m,r}(3)‖_F,  (23)

where ‖·‖_F denotes the Frobenius norm and P̃_{k,r}(3) denotes the third row vector of P̃_{k,r}. Then the target attitude P_k at the kth sub-image is given by (22) with the rotation matrix Ĉ_{k,τ} from (23):

P_k = Ĉ_{k,τ}^T P̂.  (24)

Although Ĉ_{k,τ} differs between sub-images, the attitudes of several sub-images can be obtained from one perspective, and it is not necessary to calculate Ĉ_{k,τ} for every sub-image.
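The selection step can be sketched as follows: among the four candidates, pick the one whose third row is most consistent with the third row at a reference sub-image m. This sketch assumes the four candidates are ordered consistently across sub-images, and collapses the joint search over θ and r to the r-selection for a fixed θ, which is a simplification of the full criterion.

```python
def frob(v):
    """Frobenius norm of a row vector given as a list."""
    return sum(x * x for x in v) ** 0.5

def pick_tau(third_rows_k, third_rows_m):
    """Pick the candidate index r (1-based) minimizing the difference
    between the third row of the attitude at sub-image k and at a
    reference sub-image m, i.e. J(k, m, r) = ||P_{k,r}(3) - P_{m,r}(3)||_F.
    Each argument is a list of four third-row vectors, one per candidate."""
    best_r, best_j = None, float("inf")
    for r, (zk, zm) in enumerate(zip(third_rows_k, third_rows_m), start=1):
        j = frob([a - b for a, b in zip(zk, zm)])
        if j < best_j:
            best_r, best_j = j and r or r, j
    return best_r, best_j
```

With hypothetical third rows where only candidate 2 is stable between the two sub-images, the function returns r = 2.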

Joint Multi-Perspective 3D Reconstruction
Generally speaking, the number of dominant target scatterers varies with perspective due to the anisotropy and occlusion of scattering centers. This makes it possible to improve the representation of target features, increase target information and improve target visibility by performing multi-perspective 3D reconstruction. In this paper, the 3D imaging results are illustrated in the form of point clouds. Suppose that the total number of perspectives is F and that these perspectives come from F radars. The azimuth and elevation angles of these radar LOSs are different, as shown in Figure 4. In Figure 4, there are K sub-images in each perspective, and the number of scatterers and the target attitudes obtained from each radar's 3D imaging vary with the viewing angle. Although the radar LOSs are different, the attitudes obtained in Section 3.2 are all in coordinate systems with their own rotation axes as the Z axis. Therefore, the target coordinates from these perspectives can be made to coincide with each other by only a one-dimensional rotation, that is, a rotation around the Z axis. The multi-perspective scattering fusion includes two parts: one is to find the target attitude relationship among the different perspectives, and the other is to fuse the point clouds based on the positions in the different perspectives.


Target Attitude Relationship
Section 3.2 revealed that the target attitude relationship among different perspectives is a rotation around the Z axis. Therefore, 3D point cloud matching among the different perspectives is needed before information fusion. Assume that there is a sequence of F 3D images from F perspectives with different numbers of feature points. The number of scatterers in the fth perspective is denoted as Q_f, determined by the dominant scatterer center extraction methods [28][29][30]. The scatterer coordinates are then described as P_f = [p_{f,1}, p_{f,2}, ..., p_{f,Q_f}], with p_{f,q} = (x_{f,q}, y_{f,q}, z_{f,q})^T. Let P_i and P_j denote the target attitudes in the ith and the jth perspective, respectively. In order to match the target attitudes P_i and P_j to each other, the target attitude P_j needs to be rotated by an angle φ around the Z axis, described as

P_j(φ) = R_Z(φ) P_j,
where R_Z(φ) denotes the rotation around the Z axis:

R_Z(φ) = [cos φ, −sin φ, 0; sin φ, cos φ, 0; 0, 0, 1].

Then, under the rotation angle φ, the average distance of the scattering points from perspective j to perspective i can be described as

g_{i,j}(P_i, P_j(φ)) = (1/Q_i) Σ_{q=1}^{Q_i} d_{i,j}(q, n),

where d_{i,j}(q, n) denotes the minimum Mahalanobis distance among the ranges from the qth scatterer in the ith perspective to all the scatterers in the jth perspective. Finally, the distance between the ith and the jth perspectives can be obtained by solving

G_{i,j}(φ) = max{ g_{i,j}(P_i, P_j(φ)), g_{j,i}(P_j(φ), P_i) }.

The closer the angle φ is to the true value, the smaller the distance G_{i,j}(φ). The process of searching for φ is summarized as follows.
Step 1: Initialize the rotation angle range [φ_1, φ_H], where H is a constant that does not change in each iteration; r and N are pre-set thresholds chosen by experience; n = 0; G_0 = G(φ_1).
Step 3: The distances are computed and denoted as [G(φ_1), ..., G(φ_H)].
Step 4: If G(φ_{h_0}) − G_0 < r and h_0 ≠ 1, or n > N, then stop the iteration and go to Step 6.
Step 5: Refine the search range around the current minimum and go to Step 2.
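The φ search above can be sketched as a simple grid search. This single-pass sketch abbreviates the paper's iterative range refinement, and uses the Euclidean distance as a stand-in for the Mahalanobis distance in the text.

```python
import math

def rot_z(p, phi):
    """Rotate a 3D point p around the Z axis by angle phi."""
    x, y, z = p
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi),
            z)

def avg_nn(A, B):
    """Average nearest-neighbour distance from points A to points B
    (Euclidean here, standing in for the Mahalanobis distance)."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def match_rotation(Pi, Pj, steps=360):
    """Grid-search the Z-axis rotation phi that best aligns Pj with Pi,
    using the symmetric criterion G(phi) = max(g_ij, g_ji)."""
    best_phi, best_g = 0.0, float("inf")
    for h in range(steps):
        phi = 2 * math.pi * h / steps
        Pj_rot = [rot_z(p, phi) for p in Pj]
        g = max(avg_nn(Pi, Pj_rot), avg_nn(Pj_rot, Pi))
        if g < best_g:
            best_phi, best_g = phi, g
    return best_phi, best_g
```

For a point cloud that is a copy of another rotated by −0.5 rad around Z, the search recovers φ ≈ 0.5 rad to within the grid resolution.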

Point Cloud Fusion
When all perspective images are transformed into the same coordinate system by the rotation R_Z(φ), the scatterers of different images will overlap, which causes information redundancy. The position of each fused scattering point is then computed from the overlapping scatterers (for example, as the average of their coordinates). Since several scatterers can be observed from each perspective, the number of scatterers obtained by multi-perspective 3D reconstruction increases with the number of perspectives, and the target integrity is further improved. The integrity of the multi-perspective 3D reconstruction is defined as the ratio of the number of detected scatterers to the total number of target scatterers.
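A minimal fusion sketch is given below. The greedy clustering and centroid averaging, as well as the merge radius, are illustrative assumptions; the paper's exact fusion rule is not reproduced here.

```python
import math

def fuse(points, radius=0.1):
    """Greedy fusion: points closer than `radius` are treated as the same
    scatterer and replaced by the centroid of their cluster. Both the
    radius and the averaging rule are illustrative assumptions."""
    fused = []
    used = [False] * len(points)
    for i, p in enumerate(points):
        if used[i]:
            continue
        cluster = [p]
        used[i] = True
        for j in range(i + 1, len(points)):
            if not used[j] and math.dist(p, points[j]) < radius:
                cluster.append(points[j])
                used[j] = True
        n = len(cluster)
        fused.append(tuple(sum(c[d] for c in cluster) / n for d in range(3)))
    return fused
```

Two nearly coincident scatterers seen from different perspectives collapse into one fused point, while distant scatterers are kept separate.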

Algorithm Summary
After performing scattering center extraction and trajectory tracking, the trajectory matrix is obtained and the factorization method is applied to it. The target structure with unknown attitude obtained by the traditional 3D reconstruction is shown in Figure 5a. The matrix factorization may introduce a unitary ambiguity 3D rotation matrix C, which in turn affects the target attitude of the traditional 3D reconstruction. Therefore, further processing is adopted to estimate the unitary ambiguity 3D rotation matrix C and obtain the desired target attitude, as shown in Figure 5b. Finally, the multi-perspective 3D reconstruction is obtained by fusing the information from multiple perspectives. Because the multi-perspective 3D reconstruction operates on the 3D reconstruction results generated by each perspective, there is no requirement on the locations of the radars or the times of observation. Compared with InISAR, multi-perspective 3D reconstruction is more flexible and less expensive in terms of equipment requirements, although multiple radars may be needed. The proposed method is illustrated by the flowchart in Figure 5c.

Simulations
In this section, several simulation results based on the point-scatterer model are presented to verify the effectiveness and noise robustness of the proposed method. In the first experiment, a simply shaped object is used to verify the accuracy of the proposed single-perspective target attitude estimation, and the effectiveness of the proposed method is evaluated by comparison with the traditional 3D reconstruction. For a complex target, due to occlusion and scattering characteristics, the target obtained from a single perspective is relatively simple. In the second experiment, on the basis of the attitude estimation, the 3D reconstruction results from multiple perspectives are converted into one coordinate system and fused together, so that a more complete target is obtained.


Single Perspective Attitude Estimation
An asymmetric target, shown in Figure 6a, is used to verify the accuracy of the target attitude estimation of the proposed method, where 's n' denotes the nth scatterer. The simulated target consists of seven scatterers and rotates around the Z axis at a speed of 0.069 rad/s.

Remote Sens. 2019, 11, x FOR PEER REVIEW 13 of 19
The simulated radar operates in the X-band with a transmitted signal bandwidth of 2 GHz, and the range resolution is 0.075 m. The azimuth and elevation angles of the radar LOS are 0 and π/4 rad, respectively. The total observation time is 16.5 s, and the corresponding rotational angle of 66 degrees is divided into 100 overlapped sub-images. By extraction and tracking of the scatterers, the trajectory matrix is formed, as shown in Figure 6b.
The extraction and tracking errors of scatterers influence the performance of the 3D reconstruction. To analyze the effect of trajectory errors on the attitude estimation, Gaussian noise with a standard deviation of 0.5 and of 5 times the range resolution, respectively, is added to the scatterer trajectory matrix to simulate scatterer extraction and tracking errors. The following experiments take the results of the 50th sub-image as an example. After attitude estimation, the result of J(50, m, r) generated from the unitary ambiguity 3D rotation matrix C is shown in Figure 7, where the horizontal coordinate is the index of the sub-image. Figure 7a,b are obtained under Gaussian noise with a standard deviation of 0.5 and 5 times the range resolution, respectively. Since the 50th sub-image is taken as the example, J(50, m, r) achieves its smallest value when m is 50. In Figure 7, J(50, m, 2) is the smallest over all sub-images; therefore, it is sufficient to calculate (24) with a sub-image even when the difference between m and 50 is large. Moreover, this shows that C50,2 is the required unitary ambiguity 3D rotation matrix. The results of the 3D reconstruction are shown in Figure 8, where different colors denote different scatterers and 's n' denotes the nth scatterer. The stars denote the real locations at the corresponding time and the diamonds denote the scatterer locations of the 3D reconstruction. In Figure 8a, the attitude of the traditional 3D reconstruction varies arbitrarily because of the existence of the unitary ambiguity 3D rotation matrix. After attitude estimation, the reconstructed attitude is consistent with the real attitude, as shown in Figure 8b.
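The ambiguity-selection idea — evaluate a cost for each candidate rotation and keep the minimizer — can be sketched as follows. The Frobenius-norm cost here is only a stand-in for the paper's J(k, m, r) (Eq. (24) is not reproduced in this section), and the four Z-rotations are illustrative candidates, not the paper's exact C_k,r set.

```python
import numpy as np

def pick_ambiguity(candidates, P_ref, P_sub):
    """Among candidate ambiguity rotations, pick the one whose action on the
    sub-image reconstruction P_sub best matches the reference attitude P_ref.

    The cost is a plain Frobenius-norm surrogate for the paper's J(k, m, r).
    """
    costs = [np.linalg.norm(C @ P_sub - P_ref) for C in candidates]
    r = int(np.argmin(costs))
    return candidates[r], r, costs

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy example: the sub-image attitude is offset by -90 degrees about Z, so the
# candidate at index 1 (a +90 degree rotation) should win with zero cost.
P = np.random.default_rng(1).normal(size=(3, 7))        # 3 x Q reference attitude
candidates = [np.eye(3), rot_z(np.pi / 2), rot_z(np.pi), rot_z(3 * np.pi / 2)]
C_best, r, _ = pick_ambiguity(candidates, P, rot_z(-np.pi / 2) @ P)
print(r)  # 1
```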
To evaluate the performance of the proposed method, the root mean square error (RMSE) of the 3D reconstruction is employed, which is defined as

$E_P = \sqrt{\operatorname{trace}\left[(P_e - P)(P_e - P)^{T}\right]/Q}$, (33)

where P_e is the target 3D reconstruction result, P is the true target 3D geometry, and Q is the number of target scatterers. As discussed above, inaccurate extraction and tracking of the scatterers will degrade the performance of the proposed method. To obtain the RMSE of the 3D reconstruction, 500 Monte-Carlo simulations are performed for each noise level ranging from 0.5 to 5 times the range resolution. The experimental results are shown in Figure 9, which depicts the RMSE of the attitude estimation. Figure 9 shows that the accuracy of the proposed method decreases as the noise level increases.
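Equation (33) translates directly into code. The snippet below is a minimal sketch of the metric; the toy 3 × Q matrices are illustrative, not data from the experiments.

```python
import numpy as np

def reconstruction_rmse(P_e, P):
    """RMSE of Eq. (33): sqrt(trace((P_e - P)(P_e - P)^T) / Q).

    P_e and P are 3 x Q matrices; Q is the number of scatterers.
    """
    D = P_e - P                        # 3 x Q residual matrix
    Q = P.shape[1]
    return np.sqrt(np.trace(D @ D.T) / Q)

P_true = np.array([[0.0, 1.0],
                   [0.0, 0.0],
                   [0.0, 0.0]])        # two toy scatterers
P_est = P_true + 0.1                   # uniform 0.1 m offset on every axis
print(round(reconstruction_rmse(P_est, P_true), 6))  # 0.173205
```

Note that trace(D D^T) is simply the sum of squared coordinate errors, so the metric reduces to an ordinary per-scatterer RMS distance.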

Multi-Perspective 3D Imaging Results
Using the target attitude estimation based on single-perspective 3D reconstruction, multi-perspective 3D reconstruction can be accomplished. The 3D geometry of the simulated scatterer model is presented in Figure 10; it consists of 165 scatterers and rotates around the Z axis at 0.069 rad/s. The radar parameters are the same as in the first experiment. Generally speaking, the number and locations of the scattering centers of the same target differ between perspectives. In this experiment, three perspectives are chosen. In each perspective, a different number of the target's scatterers is randomly chosen as the observable scatterers. In addition, the elevation and azimuth angles of the LOS differ from perspective to perspective.
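The per-perspective visibility used in this experiment can be mimicked by random subsampling of the scatterer model. This is a sketch of the setup only; `observable_subset` is a hypothetical helper, and the real occlusion pattern depends on the target geometry rather than a random draw.

```python
import numpy as np

def observable_subset(scatterers, n_visible, seed):
    """Randomly mark n_visible of the target's scatterers as observable from one
    perspective, standing in for occlusion and scattering effects."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(scatterers.shape[0], size=n_visible, replace=False)
    return scatterers[np.sort(idx)]

model = np.random.default_rng(0).normal(size=(165, 3))   # 165-scatterer toy model
# 51, 58 and 54 visible scatterers for the three perspectives, as in the text.
views = [observable_subset(model, n, s) for n, s in [(51, 1), (58, 2), (54, 3)]]
print([v.shape[0] for v in views])  # [51, 58, 54]
```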

For the first perspective, the azimuth and elevation angles of the radar LOS are 0 and 30 degrees, respectively. In this perspective, only 51 scatterers can be observed, mainly distributed in the main body and the left side of the airplane. For the second perspective, the azimuth and elevation angles are 30 and 25 degrees, respectively; only 58 scatterers can be observed, mainly distributed in the main body and the right side of the airplane. For the third perspective, the azimuth and elevation angles are 45 and 20 degrees, respectively; only 54 scatterers can be observed, mainly distributed in the main body of the airplane.

Figure 11 shows the variation of the RMSE of 3D reconstruction with respect to the elevation angle of the radar LOS. In Figure 11a,b, the estimation deviations of the first two elevation angles are 0, and Figure 11c shows that the estimation deviation of the third perspective's elevation angle is less than 0.25 degrees. When the elevation angle equals the true value, the RMSE of 3D reconstruction achieves its smallest value. Figure 12 shows the variation of the estimated attitudes with respect to the different perspectives; the stars denote the real target attitude at the corresponding time and the diamonds denote the 3D reconstruction attitude. Figure 13 shows the variation of J(50, m, r) with the sub-images. Figure 13a is obtained when the rotational velocity is 10 degrees per second and the noise level is 5 times the range resolution. In order to clearly observe the variation of J(50, m, r), Figure 13b magnifies the change near the 50th sub-image. As can be seen from Figure 13a,b, C50,2 is the required unitary ambiguity 3D rotation matrix. Finally, by rotating the different sub-perspective 3D images based on Section 3.3, these 3D images are placed in the same coordinate system. After 3D point cloud information fusion, the multi-perspective 3D results can be obtained.

In order to evaluate the performance of the proposed method, 500 Monte-Carlo simulations are carried out to evaluate the influence of rotational velocity and noise level on the proposed method. The three perspectives, the same as those in Figure 11, are used. The noise level is increased from 0.5 to 5 times the range resolution in increments of 1.5, and the rotational speed is increased from 2 to 10 degrees per second in increments of 1. In these simulations, the total rotation is 180 degrees and the total number of sub-images is 100. Suppose that the required estimated attitude is from the 50th sub-image. The experimental results are presented in Figures 14 and 15. The corresponding attitudes obtained from the multi-perspective 3D images are shown in Figure 14. Apparently, the accuracy of attitude estimation in Figure 14b is worse than that in Figure 14a. Figure 14c is obtained by performing 500 Monte-Carlo trials; it shows that the integrity of the multi-perspective 3D reconstruction increases with the number of utilized perspectives, that is, the visibility is improved.

Figure 15 shows the variation of the RMSE of the multi-perspective 3D reconstruction with the noise level and rotational velocity. When the noise level is relatively low, such as 0.5 and 2.0 times the range resolution, the RMSE tends to become slightly larger as the rotational velocity increases. When the noise level reaches 5 times the range resolution, the effect of the rotational velocity on the performance of the multi-perspective 3D reconstruction method is no longer dominant. Meanwhile, for a fixed rotational velocity, the RMSE becomes larger as the noise level increases, which indicates that the noise level has a greater effect on the accuracy of the multi-perspective 3D reconstruction.
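The fusion step — rotate each perspective's reconstruction into a common frame, then merge redundant points into single-layer information — can be sketched as below. The greedy distance-threshold merge is a simplification of the paper's fusion rules, and `fuse_perspectives` and its `tol` parameter are assumptions for illustration.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fuse_perspectives(clouds, z_angles, tol=0.05):
    """Rotate each perspective's point cloud about Z into a common frame, stack
    the points, and greedily merge near-duplicates (closer than `tol`).

    `z_angles` are the per-perspective alignment angles from the attitude
    estimation step.
    """
    aligned = [pts @ rot_z(a).T for pts, a in zip(clouds, z_angles)]
    merged = []
    for p in np.vstack(aligned):
        if all(np.linalg.norm(p - q) > tol for q in merged):
            merged.append(p)          # keep only the first point of each cluster
    return np.asarray(merged)

# Two views of the same 3-point target; the second view is offset by -30 degrees
# about Z, which the alignment angle of +30 degrees undoes.
base = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
view2 = base @ rot_z(-np.pi / 6).T
fused = fuse_perspectives([base, view2], [0.0, np.pi / 6])
print(fused.shape[0])  # 3: the duplicated scatterers from the two views merge
```

With partially overlapping visibility per perspective, the same routine yields a fused cloud containing more scatterers than any single view, which is the visibility gain reported above.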

Conclusion
In order to overcome the limitations of 2D imaging of a 3D target, 3D imaging via multi-perspective 3D reconstruction is needed. However, the target attitude obtained by the traditional 3D imaging method is unknown. Moreover, single-perspective 3D reconstruction cannot provide complete information about a target, since several scatterers may be missed from a single perspective. To solve these problems, this paper proposes a new method for multi-perspective 3D imaging with single-perspective instantaneous target attitude estimation. The target attitude of the single-perspective 3D reconstruction is estimated first. Then, the target attitudes obtained from different perspectives are converted into the same coordinate system. Finally, the redundant information is merged into single-layer information and the final 3D imaging result is obtained. The effectiveness and noise insensitivity of the proposed method are verified by several numerical examples. Compared with the traditional single-perspective 3D reconstruction, multi-perspective 3D imaging increases the target information and improves the visibility.

Figure 1 .
Figure 1.Radar position and the projection vectors of the kth imaging plane.(a) The rotation relationship between the LOS and the target.(b) The two-dimensional projection vectors of the kth imaging plane.

stands for four different results of C_k. The relationship between the constraints and the corresponding vectors is illustrated in Figure 2: the illustration of the first constraint equation corresponds to Figure 2a,b, and that of the second corresponds to Figure 2c,d.

Based on the above analysis, the relationship of these four cases of C_k is as follows: c_2 is the same in all four cases, while another projection vector has opposite directions in (19.1) and (19.3); the same is true for (19.2) and (19.4). Comparing (19.1) and (19.4), there exists a rotation relationship for c_3.

Figure 2 .
Figure 2. Four different possible results of C k .(a) For C k,1 , c 3 locates at the left side of ξ k .(b) For C k,2 , c 3 locates at the right side of ξ k .(c) For C k,3 , c 1 is perpendicular to the plane c 2 Oc 3 and downward.(d) For C k,4 , c 1 is perpendicular to the plane c 2 Oc 3 and upward.

The effect of C_k,1 on P is taken as the reference. Then, the effect of C_k,r (r = 2, 3, 4) on P is as follows: C_k,3 reverses the first dimensional vector of P_0, while the third dimension is consistent with that of P_0. The effect of C_k,4 on P is as follows: first, P_0 rotates around ξ_k; then, the first dimensional vector of the rotated result is reversed. Four possible results through the function of C_k can be obtained at each time t_k, and there exists a certain relationship among them. In order to find the required C_k among these four cases, the following process is proposed.

Figure 3 .
Figure 3. Placement of projection vectors.(a) Placement of α 0 and β 0 .(b) Placement of four cases of the estimated projection vectors.

Figure 4 .
Figure 4.The model of multiple perspectives.

P_i and P_j denote the target attitudes in the ith perspective and the jth perspective, respectively. In order to match the target attitudes P_i and P_j to each other, the target attitude P_j needs to be rotated by an angle around the Z axis.


Figure 5 .
Figure 5. Summary of the multi-perspective 3D reconstruction algorithm. (a) The traditional 3D reconstruction. (b) The procedure of attitude estimation for the fth perspective. (c) Multi-perspective 3D imaging.

Figure 6 .
Figure 6.The model of the simulated target.(a) The point-scatterer model.(b) The scatterer trajectories.


Figure 7 .
Figure 7.The effect of the unitary ambiguity 3D rotation matrix on the 50th sub-image.(a) The noise is 0.5 times of the range resolution.(b) The noise is 5 times of the range resolution.


Figure 8 .
Figure 8. The attitudes from the 3D reconstruction result with the standard deviation of the noise set to half of the range resolution. (a) The traditional 3D reconstruction. (b) The proposed 3D reconstruction.


Figure 9 .
Figure 9. RMSE of the 3D reconstruction proposed in this paper after 500 Monte-Carlo experiments.


Figure 10 .
Figure 10. The model of the experiment.


Figure 11 .
Figure 11.The estimations of the elevation angle of the three perspectives.(a)~(c) Variation of the RMSE of 3D reconstruction with respect to the elevation angle of the radar LOS in different perspectives.


Figure 12 .
Figure 12. Variation of estimated attitudes of 3D reconstruction from three different perspectives. (a)~(c) The attitudes obtained by the proposed method from different perspectives.




Figure 14 .
Figure 14. The estimated attitudes of multi-perspective 3D reconstruction under different conditions. (a) The estimated attitudes of multi-perspective reconstruction when the rotational velocity is 2 degrees per second and the noise level is 0.5 times the range resolution. (b) The estimated attitudes when the rotational velocity is 10 degrees per second and the noise level is 5 times the range resolution. (c) The integrity of multi-perspective 3D reconstruction varies with the number of utilized perspectives.

Figure 15 .
Figure 15. Variation of RMSE of multi-perspective 3D reconstruction with the noise level and rotational velocity. (a) Variation of RMSE with respect to noise level when the rotational velocity is 2 degrees per second. (b) Variation of RMSE with respect to rotational velocity for each noise level.