Article

Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform

College of Electronic Science, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(6), 1806; https://doi.org/10.3390/s18061806
Submission received: 24 April 2018 / Revised: 22 May 2018 / Accepted: 30 May 2018 / Published: 4 June 2018
(This article belongs to the Section Remote Sensors)

Abstract:
In real applications, the image quality of conventional monostatic Inverse Synthetic Aperture Radar (ISAR) for a maneuvering target is subject to strong fluctuation of the Radar Cross Section (RCS), as the target aspect varies enormously. Meanwhile, the maneuvering target introduces nonuniform rotation after translation motion compensation, which degrades the imaging performance of the conventional Fourier Transform (FT)-based method in the cross-range dimension. In this paper, a method which combines the distributed ISAR technique and the Matching Fourier Transform (MFT) is proposed to overcome these problems. Firstly, according to the characteristics of the distributed ISAR, multichannel echoes of the nonuniform rotation target are acquired from different observation angles. Then, by applying the MFT to the echo of each channel, the defocusing that is inevitable with FT-based imaging of a nonuniform rotation target can be avoided. Finally, after preprocessing, scaling and rotation of all subimages, a noncoherent fusion image containing the RCS information of all channels can be obtained. The accumulation coefficients of all subimages are calculated adaptively according to their image qualities. Simulation and experimental data are used to validate the effectiveness of the proposed approach, and a fusion image with improved recognizability is obtained. Therefore, by using the distributed ISAR technique and the MFT, subimages of a high-maneuvering target from different observation angles can be obtained. Meanwhile, by employing the adaptive subimage fusion method, the RCS fluctuation can be alleviated and a more recognizable final image can be obtained.

1. Introduction

Conventional monostatic ISAR images are obtained under the condition that the change of the radar observation angle is small ($< 5°$) [1]. For modern high-maneuvering aircraft with high speed, stealth and other characteristics, even a slight change of the observation angle can cause a fluctuation of 10 to 15 dB in the RCS [2]. This strong fluctuation of the RCS causes a deterioration in ISAR image quality; therefore, the imaging quality of monostatic ISAR for a high-maneuvering target is easily influenced by the observation angle. Meanwhile, after translation motion compensation, the nonuniform rotation caused by the maneuvering motion degrades the imaging performance of the conventional FT-based method.
Unlike conventional monostatic ISAR, the distributed ISAR technique can utilize data acquired from multiple observation angles to improve the image quality [3,4,5,6]. Each radar sensor is characterized by either transmitting capability or receiving capability, and the receiving sensors can receive and separate all the transmitted signals. Therefore, any transmitting sensor and any receiving sensor can form a transmitting/receiving channel (or can be considered to form an equivalent self-transmitting and self-receiving sensor). With an appropriate formation of the radar sensors, the target can be observed from multiple observation angles, which provides a way to overcome the RCS fluctuation.
Recently, research on distributed ISAR has grown rapidly. Pastina et al. [3,4,5,6] analyzed the potential of distributed ISAR to increase the cross-range resolution by exploiting multiple equivalent sensors to increase the global variation of the view angle. However, its effectiveness relies on the assumption that the change of the observation angle is small, so that the range-compressed echoes of different channels have stable phases. In order to obtain a target image that overcomes the RCS fluctuation, however, the observation angles must be very different and the phase stability cannot be guaranteed; thus, the methods proposed in [3,4,5] are not easy to apply in this scenario. Furthermore, for the high-maneuvering target, which is inevitable in real scenarios, there is a nonuniform rotation after translation motion compensation. As a result, the change of the observation angle of each equivalent sensor is no longer a linear function of the slow time, which means that the combined view angle is not continuous and cannot provide better resolution. Thus, the methods proposed in [3,4,5] have limitations for radar formations involving large variation among the individual perspectives and high-maneuvering targets.
Image fusion using subimages obtained from the distributed ISAR system is a solution to the RCS fluctuation problem of the high-maneuvering target. The distributed ISAR system enables the target to be observed from multiple perspectives, and image fusion, instead of echo fusion, reduces the limitation on the change of the observation angle. References [7,8] acquire the fusion image after estimating the rotation rate and the bistatic angle from two subimages; without more observation channels and more RCS information, the improvement of the image quality of the bistatic ISAR is limited. Reference [9] proposed an image fusion method for the uniform rotating target via distributed ISAR, but did not consider the nonuniform rotating target.
In this work, the authors are interested in the potential of the distributed ISAR to acquire the image of the high-maneuvering target. Thus, the two aforementioned key problems must be considered, namely RCS fluctuation in different observation angles and target nonuniform rotation after translation compensation. For the first problem, the distributed ISAR technique is applied to obtain subimages from different observation angles with different RCS values. Then, the fusion image which contains all the RCS information and with improved quality can be acquired by the non-coherent fusion method.
For the second problem, it can be shown that the radar echoes are modelled as a polynomial phase signal (PPS), and the Fourier transform is inappropriate for azimuth focusing. To solve this problem, the Range-Instantaneous-Doppler (RID) algorithm has been proposed to improve ISAR image quality, where the Fourier transform is substituted by time-frequency representations (TFRs). The first approach of the RID algorithm is based on TFRs with high concentration and reduced cross-terms, such as the imaging methods based on the Short Time Fourier Transform (STFT) [10], the Wigner-Ville distribution (WVD) [11] and so on. Though the RID algorithms perform well in terms of computational efficiency, they still suffer from the tradeoff between time-frequency concentration and cross-terms. Also, the obtained RID images are not very stable, which causes difficulty in the subimage fusion step. The second approach is based on parameter estimation: high-quality ISAR images can be obtained by estimating the coefficients of the PPS. The parameterized imaging methods (such as [12,13]) are effective for the enhancement of ISAR images. However, for the distributed ISAR system, it is computationally expensive to estimate the signal parameters of all range bins in the echoes of all observation channels. On the other hand, the rotational parameters of the target can be regarded as invariable over the whole observation time, and the ratio of the rotational acceleration to the rotational speed is also fixed. Therefore, estimating the ratio of the rotational parameters in one or several range bins, rather than estimating signal parameters in all range bins, greatly reduces the amount of computation. Wu [14] describes the rotational nonuniformity by the relative angular acceleration (RAA) and relative angular jerk (RAJ), and with the estimated RAA and RAJ, rotational nonuniformity compensation is carried out.
This method is effective for obtaining a high-quality ISAR image, but for distributed ISAR it is still cumbersome to construct a compensation matrix for each equivalent sensor. Thus, the MFT imaging method is used here: after estimating the rotation parameters, the ISAR images of all observation channels can be obtained directly through the MFT.
This paper is organized as follows. After presenting the signal model of distributed ISAR in Section 2, the imaging method of the nonuniform rotation target based on MFT is introduced in Section 3. Then, both simulation and experimental results are presented to validate the effectiveness of the proposed method in Section 4. Finally, we conclude this paper in Section 5.

2. Distributed ISAR Echo Model

Consider a 3D coordinate system (O, X, Y, Z) with the origin at the target's fulcrum; the target's motion can be decomposed into a translation of the fulcrum and a rotation of the target body. We assume here that any relative translation motion between the distributed sensors and the target fulcrum has already been compensated, so we can focus on the target rotation. To simplify the processing algorithm, only the dominant rotation around the vertical axis is considered. Also, we model the target as a rigid body consisting of Q scatterers. The radar formation and the target rotation are shown in Figure 1.
The distributed ISAR system consists of M transmitting sensors and N receiving sensors, as shown in Figure 1; m denotes the mth transmitting sensor and n the nth receiving sensor. Each of them can be placed on the ground and carry an antenna appropriately steered toward the moving target in the air. Another possible application is that each radar sensor is carried by an aircraft to observe a target on the ground. The detailed placement of the sensors will be introduced later.
Assume the M transmitted signals $s_m(\hat{t})$ are orthogonal, and each receiving sensor can receive and separate the signals from different transmitting sensors. Thus, $I = MN$ transmitting/receiving channels can be formed. Meanwhile, we assume that all sensors have achieved time synchronization to ensure accurate matching of the transmitted and received signals. After demodulation and range compression, the received backscattered signal in the m-nth observation channel is denoted as
$$S_{mn}(\hat{t}, t_p) = \sum_{q=1}^{Q}\sigma_{mn,q}\, p_m\!\left(\hat{t} - \frac{R_{mn,q}(t_p)}{c}\right) e^{-j\frac{2\pi}{\lambda}R_{mn,q}(t_p)}, \tag{1}$$
where $\hat{t}$ is the fast time, $t_p$ is the slow time, $c$ is the wave velocity, $\lambda$ is the carrier wavelength, $\sigma_{mn,q}$ is the scattering coefficient of the qth scatterer in the m-nth observation channel, and $R_{mn,q}(t_p)$ is the propagation distance of the signal of the qth scatterer in the m-nth channel. $p_m(t)$ denotes the point spread function of $s_m(\hat{t})$; for any $m \in \{1, 2, \dots, M\}$, $p_m(t) \approx p(t)$, where $p(t)$ is the sinc function [15].
The position vector of the qth scatterer can be written as
$$\mathbf{r}_q(t_p) = r_q\left[\cos\kappa_{q0}\sin(\theta_{q0}+\phi(t_p)),\ \cos\kappa_{q0}\cos(\theta_{q0}+\phi(t_p)),\ \sin\kappa_{q0}\right], \tag{2}$$
where r q is the distance of the scatterer from the fulcrum O, θ q 0 is the initial azimuth angle, κ q 0 is the elevation angle of the qth scatterer above the XOY plane, and ϕ ( t p ) is the rotation angle at slow time t p measured in clockwise. So, θ q 0 + ϕ ( t p ) is the real azimuth angle at t p .
The position vectors of the mth transmitting sensor and the nth receiving sensor are denoted as $\mathbf{R}_{0m}$ and $\mathbf{R}_{0n}$:
$$\mathbf{R}_{0m} = R_{0m}\left[\cos\psi_m\sin\zeta_m,\ \cos\psi_m\cos\zeta_m,\ \sin\psi_m\right]; \tag{3}$$
$$\mathbf{R}_{0n} = R_{0n}\left[\cos\psi_n\sin\zeta_n,\ \cos\psi_n\cos\zeta_n,\ \sin\psi_n\right], \tag{4}$$
where $R_{0m}$ and $R_{0n}$ are the distances between the fulcrum and the transmitting or receiving sensor, and $\zeta_m$ ($\zeta_n$) is the angle between the positive Y-axis and the projection onto the XOY plane of the line connecting the transmitting (receiving) sensor and the target fulcrum, as drawn in Figure 1. $\psi_m$ and $\psi_n$ are the grazing angles of the transmitting and receiving sensors. For the sake of simplicity, the grazing angles of all sensors are assumed to be the same value $\psi_0$, which is reasonable when the sensors are not very far from each other and the target is located far away.
Therefore, under the far-field assumption, the propagation distance R m n , q ( t p ) can be expressed approximately as
$$R_{mn,q}(t_p) = \left\|\mathbf{R}_{0m}-\mathbf{r}_q(t_p)\right\| + \left\|\mathbf{R}_{0n}-\mathbf{r}_q(t_p)\right\| \approx R_{0m}+R_{0n}-\left(\frac{\mathbf{R}_{0m}}{R_{0m}}+\frac{\mathbf{R}_{0n}}{R_{0n}}\right)\cdot\mathbf{r}_q(t_p) = 2\left(R_{mn}-r_q\left(\cos\kappa_{q0}\cos\psi_0\cos(\theta_{q0}+\phi(t_p)-\alpha_{mn})\cos\beta_{mn}+\sin\psi_0\sin\kappa_{q0}\right)\right), \tag{5}$$
where $R_{mn} = (R_{0m}+R_{0n})/2$, $\alpha_{mn} = (\zeta_m+\zeta_n)/2$, and $\beta_{mn} = (\zeta_m-\zeta_n)/2$ are the mean distance, mean angle, and half difference angle of the m-nth transmitting/receiving channel pair, respectively [4].
The transmitting/receiving pair $(m, n)$ can be regarded as the ith equivalent sensor. By setting $\alpha_i = \alpha_{mn}$ and $\beta_i = \beta_{mn}$ and neglecting the constant distance $2R_{mn}$ under the assumption that the translation motion has been compensated, the varying range term of the qth scatterer in the ith equivalent sensor can be written as
$$r_{iq}(t_p) = 2 r_q\left(\cos\kappa_{q0}\cos\psi_0\cos(\theta_{q0}+\phi(t_p)-\alpha_i)\cos\beta_i+\sin\psi_0\sin\kappa_{q0}\right). \tag{6}$$
Therefore, the received signal of the ith equivalent sensor can be expressed as
$$S_i(\hat{t}, t_p) = \sum_{q=1}^{Q}\sigma_{i,q}\, p\!\left[\hat{t} - r_{iq}(t_p)/c\right] e^{j\frac{2\pi}{\lambda}r_{iq}(t_p)}. \tag{7}$$
Due to the spatial separation of the equivalent sensors, the imaging projection planes (IPPs) may not be the same. Since each equivalent sensor can be regarded as working independently, and their analyses are similar, we only present the analysis of the ith equivalent sensor. As shown in Figure 2, since the target rotates around the Z-axis, the rotation vector is simply $\boldsymbol{\omega} = \omega[0, 0, 1]$. $\mathbf{R}_i$ represents the range unit vector of the ith IPP and points from the fulcrum O to the ith equivalent sensor.
In ISAR imaging, the azimuth direction of the IPP is along the cross product of the range unit vector and the effective rotation vector $\boldsymbol{\omega}_{ei}$, so the red line in Figure 2 represents the azimuth direction. The ith IPP is the plane containing $\mathbf{R}_i$ and $\mathbf{R}_i \times \boldsymbol{\omega}_{ei}$.
When the IPPs are not the same, neither are the subimages from different observation angles. In order to place all the subimages in the same plane, they need to be projected onto a unified IPP [6]. Therefore, we assume that $\psi_0 = 0$; according to the rotation vector $\boldsymbol{\omega}$, the IPP is then the XOY plane in Figure 1. However, we must stress that in a real scene the radar sensors should be placed such that the IPPs can be approximated as the same plane. If the radar formation does not meet this requirement, the method in this paper needs some modification.
Based on the above assumption, the image of the target is its projection onto that plane; in this paper, the IPP is the XOY plane in Figure 1. Although any values of $\alpha_i$ and $\beta_i$ could be used in this case, in practice we still have to arrange the radars reasonably and limit the values of $\psi_0$, $\alpha_i$ and $\beta_i$. Assuming the target has been projected onto the XOY plane, we let $\kappa_{q0} = 0$, so $r_{iq}$ can be re-expressed as
$$r_{iq}(t_p) = 2 r_q\cos(\theta_{q0}+\phi(t_p)-\alpha_i)\cos\beta_i. \tag{8}$$
Equations (7) and (8) are the new echo expressions, which will be analyzed to obtain the target image.
For the distributed ISAR system, the observation time is short and $\phi(t_p)$ is small. By using the approximations $\cos\phi(t_p) \approx 1$ and $\sin\phi(t_p) \approx 0$ in the range dimension, which are reasonable as the range error caused by these approximations is negligible compared with the range resolution, the compressed range position of the qth scatterer can be expressed as
$$r_{iq} = 2 r_q\cos(\theta_{q0}-\alpha_i)\cos\beta_i. \tag{9}$$
While in the cross-range dimension, we use the more accurate approximations $\sin(\phi(t_p)) \approx \phi(t_p)$, $\cos(\phi(t_p)) \approx 1$. Based on these approximations, $S_i(\hat{t}, t_p)$ can be rewritten as
$$S_i(\hat{t}, t_p) = \sum_{q=1}^{Q}\sigma_{i,q}\, p\!\left[\hat{t} - r_{iq}/c\right]\cdot\exp\!\left[j 4\pi r_q\cos(\theta_{q0}-\alpha_i)\cos\beta_i/\lambda\right]\cdot\exp\!\left[-j 4\pi r_q\sin(\theta_{q0}-\alpha_i)\,\phi(t_p)\cos\beta_i/\lambda\right]. \tag{10}$$
In Equation (10), the first exponential term is a constant related to the scattering point and the transmitting/receiving channel, and has no effect on the imaging result. The cross-range imaging information is contained in the second exponential phase. For a nonuniform rotation target, $\phi(t_p)$ is a polynomial function and $S_i(\hat{t}, t_p)$ is a polynomial phase signal (PPS). If the cross-range compression is performed with the FT, the second-order term of $\phi(t_p)$ (corresponding to the rotational acceleration) or even higher-order terms (corresponding to higher-order rotational motion) will blur the image in the cross-range dimension. To avoid this effect, the MFT is used here for cross-range compression instead of an FT-based imaging method. The MFT is a generalization of the FT and can effectively deal with PPSs.
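To make this defocusing effect concrete, the following minimal sketch (our own illustration, not code from the paper; all parameter values are assumed) compares FT-based cross-range compression of a single scatterer under uniform and nonuniform rotation:

```python
import numpy as np

# Cross-range signal of a single scatterer (cf. Eq. (10)) under uniform
# vs. nonuniform rotation; all parameter values are illustrative only.
N, T = 256, 0.5
t = np.linspace(0.0, T, N)                 # slow time
lam = 0.03                                 # wavelength [m]
w0, w1 = 2.0, 2.0                          # rotation rate / acceleration
x = 1.0                                    # cross-range position [m]

phi_uniform = w0 * t                       # linear phase -> single Doppler tone
phi_nonuni = w0 * t + 0.5 * w1 * t**2      # quadratic term from the acceleration

s_uniform = np.exp(1j * 4 * np.pi * x * phi_uniform / lam)
s_nonuni = np.exp(1j * 4 * np.pi * x * phi_nonuni / lam)

# FT-based cross-range compression: the acceleration smears the
# nonuniform case across many Doppler bins, lowering the peak.
peak_uniform = np.abs(np.fft.fft(s_uniform)).max()
peak_nonuni = np.abs(np.fft.fft(s_nonuni)).max()
```

The ratio of the two peaks grows with the chirp bandwidth $2x\omega_1 T/\lambda$ swept by the quadratic phase, which is why stronger maneuvering blurs the FT image more.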

3. Image Fusion of Nonuniform Rotation Target

3.1. Single ISAR Imaging by Matching Fourier Transform

The MFT can focus signals with nonlinear phase changes [16,17]. Consider a continuous signal $f(t) = A_i e^{j\omega_i\varphi(t)}$ with observation time $[0, T_a]$, where $\varphi(t)$ is the frequency modulation function. If $\varphi(t)$ is monotonic and bounded with $\varphi(0) = 0$, the MFT of $f(t)$ is
$$F(\omega) = \int_0^{T_a} f(t)\, e^{-j\omega\varphi(t)}\, d\varphi(t), \tag{11}$$
where ω is the MFT frequency.
As aforementioned, for a nonuniform rotation target, $\phi(t_p)$ is a polynomial function. We use $\omega_0, \omega_1, \dots$ to represent the different-order components of the rotational angular velocity: $\omega_0$ represents the uniform rotational component, $\omega_1$ the first-order rotational acceleration, and so on. Therefore, the rotation angle $\phi(t_p)$ can be expanded as
$$\phi(t_p) = \sum_{d=1}^{\infty}\frac{\omega_{d-1}\, t_p^d}{d!} = \omega_0\sum_{d=1}^{\infty}\frac{e_{d-1}\, t_p^d}{d!} = \omega_0\vartheta(t_p), \tag{12}$$
where $e_{d-1} = \omega_{d-1}/\omega_0$ (so $e_0 = 1$). $\vartheta(t_p)$ is determined by the target rotation characteristics and is the same for all scatterers.
Substituting (12) into (10) and neglecting the first constant term in (10), the cross-range signal in one range bin can be rewritten as
$$S_i(t_p) = \sum_{q=1}^{Q}\sigma_{i,q}\exp\!\left[-j 4\pi x_{iq}\,\omega_0\vartheta(t_p)/\lambda\right], \tag{13}$$
and
$$x_{iq} = r_q\sin(\theta_{q0}-\alpha_i)\cos\beta_i \tag{14}$$
is the equivalent cross-range position.
It is clear that (13) is a sum of signals with the same frequency modulation function $\vartheta(t_p)$, and $\vartheta(t_p)$ is monotonic and bounded with $\vartheta(0) = 0$, which meets the definition of $f(t)$. Thus, with an estimate of $e_{d-1}$, the MFT can be applied to $S_i(t_p)$. Defining T as the observation time, the MFT expression is:
$$S_i(\omega) = \int_0^{T}\sum_{q=1}^{n_q}\sigma_{i,q}\exp\!\left\{-j\left(4\pi x_{iq}\omega_0/\lambda+\omega\right)\vartheta(t_p)\right\}d\vartheta(t_p). \tag{15}$$
Using the linear properties of the MFT and letting $f_d = \omega/(2\pi)$, we can get
$$S_i(f_d) = \sum_{q=1}^{n_q}\sigma_{i,q}\,\vartheta(T)\,\mathrm{sinc}\!\left[\vartheta(T)\left(f_d+2x_{iq}\omega_0/\lambda\right)\right]\exp\!\left\{-j\pi\vartheta(T)\left(f_d+2x_{iq}\omega_0/\lambda\right)\right\}. \tag{16}$$
$S_i(f_d)$ is a set of narrow sinc pulses after the MFT, and the equivalent cross-range position of a scattering point can be calculated from the peak position $f_{dq}$ of its pulse as $x_{iq} = -f_{dq}\lambda/(2\omega_0)$. Thus, the scatterers are focused in both the range and cross-range dimensions. The width of the sinc pulse is $1/\vartheta(T)$, and the equivalent cross-range resolution is $\rho_e = \lambda/(2\omega_0\vartheta(T)) = \lambda/(2\phi(T))$. Consider two scatterers in the same range bin, i.e., with coordinate difference $(\Delta x, 0)$. By expanding $x_{iq}$ as $(x_{q0}\cos\alpha_i - y_{q0}\sin\alpha_i)\cos\beta_i$ (where $x_{q0} = r_q\sin\theta_{q0}$, $y_{q0} = r_q\cos\theta_{q0}$), it is easy to see that the distance between these two scatterers in the image is $\Delta x\cos\alpha_i\cos\beta_i$. Thus, in order to resolve them, the following condition should be satisfied:
$$\Delta x\cos\alpha_i\cos\beta_i > \lambda/\left(2\phi(T)\right). \tag{17}$$
Obviously, after obtaining an estimate of the ratios $e_{d-1}$ of the target rotation parameters, the MFT is very succinct and directly yields azimuth-focused results for all range bins. This article assumes that $e_{d-1}$ has been estimated by other methods. When the target's maneuvering is not severe, the PPS model degrades into an LFM signal. According to previous studies, the fractional Fourier transform (FrFT) [18], the Radon-Wigner transform [19], adaptive Chirplet decomposition [20] and the centroid frequency-chirp rate distribution (CFCRD) [21] are effective approaches for parameter estimation of LFM signals. When the target's maneuvering is severe, several algorithms for cubic coefficient estimation are available, such as the higher-order ambiguity function-integrated cubic phase function [22], the scaled Fourier transform [23], and the keystone time-chirp rate distribution (KTCRD) [24]. Meanwhile, the rotational parameters can also be estimated from the echoes of multiple channels [1,25,26].
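Under the conventions above, the discrete MFT can be sketched numerically as follows (our own illustration, not code from the paper; the rotation values $\omega_0 = 0.01$ rad/s and $e_1 = 4$ are borrowed from the simulation in Section 4.1, while the wavelength and scatterer position are assumed):

```python
import numpy as np

def mft(sig, theta, freqs_hz):
    """Discrete Matching Fourier Transform (cf. Eqs. (11) and (15)):
    S(f) = sum_n sig[n] * exp(-j*2*pi*f*theta[n]) * dtheta[n]."""
    dtheta = np.gradient(theta)          # measure d(vartheta) per sample
    return np.array([np.sum(sig * np.exp(-2j * np.pi * f * theta) * dtheta)
                     for f in freqs_hz])

# Nonuniform rotation: vartheta(t) = t + e1*t^2/2 for a quadratic phi(t)
T, N = 0.5, 512
t = np.linspace(0.0, T, N)               # slow time
w0, e1 = 0.01, 4.0                       # values from Sec. 4.1
theta = t + e1 * t**2 / 2                # vartheta(t_p), vartheta(0) = 0
lam = 0.03                               # wavelength [m] (X-band, assumed)
x = 1.5                                  # equivalent cross-range position [m]

# Single-scatterer cross-range signal of Eq. (13)
sig = np.exp(-1j * 4 * np.pi * x * w0 * theta / lam)

freqs_hz = np.linspace(-5.0, 5.0, 2001)  # f_d grid [Hz]
spec = np.abs(mft(sig, theta, freqs_hz))
fd_peak = freqs_hz[np.argmax(spec)]      # sinc peak at f_d = -2*x*w0/lam
x_est = -fd_peak * lam / (2 * w0)        # recover the cross-range position
```

Replacing $\vartheta(t_p)$ by $t_p$ turns the same routine into an ordinary discrete-time Fourier transform, which is the sense in which the MFT generalizes the FT; here `x_est` recovers the assumed 1.5 m position up to the $f_d$ grid spacing.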

3.2. Subimage Fusion

By applying range compression and the MFT to the echoes of all transmitting/receiving channels, all subimages can be obtained. Due to the different RCS values in different transmitting/receiving channels, the quality of these subimages fluctuates strongly. In some subimages, the scatterers can be clearly distinguished, while in others some scatterers are submerged in noise. To obtain a stable imaging result, all subimages are accumulated in the image domain.

3.2.1. Subimage Resampling

For subsequent image processing, the first operation is to equalize the sampling grid in the range and cross-range dimensions. Denote the sampling grid of the original image as $(\Delta l_s, \Delta l_a)$, where $\Delta l_s$ and $\Delta l_a$ are the range and cross-range sampling intervals:
$$\Delta l_s = c/(2 f_s); \tag{18}$$
$$\Delta l_a = \lambda\cdot\mathrm{PRF}/(2\omega_0 N_f), \tag{19}$$
where $f_s$ is the sampling frequency in the range dimension, PRF is the pulse repetition frequency, $\omega_0$ is the uniform rotational component, and $N_f$ is the number of points of the MFT.
Denote the sampling grid of the image after resampling as $(\Delta l_s', \Delta l_a')$. According to the relation between $\Delta l_s$ and $\Delta l_a$, in order to make $\Delta l_s' = \Delta l_a'$, the original image can be up-sampled or down-sampled in the range or cross-range dimension.
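As a sketch of this resampling step (our own illustration; the system parameter values are assumed, not taken from the paper), the two pixel sizes of Eqs. (18) and (19) can be computed and each range line linearly interpolated until the grid is square:

```python
import numpy as np

c = 3e8                  # propagation speed [m/s]
lam = 0.03               # wavelength [m]
fs = 400e6               # range sampling frequency [Hz] (assumed)
prf = 400.0              # pulse repetition frequency [Hz]
w0 = 0.01                # uniform rotational component [rad/s]
Nf = 512                 # number of MFT points

dl_s = c / (2 * fs)               # Eq. (18): range pixel size
dl_a = lam * prf / (2 * w0 * Nf)  # Eq. (19): cross-range pixel size

def resample_cross_range(img, old_step, new_step):
    """Linearly interpolate each range line so the cross-range
    sampling interval changes from old_step to new_step."""
    n = img.shape[1]
    old_axis = np.arange(n) * old_step
    new_axis = np.arange(0.0, old_axis[-1] + 1e-9, new_step)
    return np.vstack([np.interp(new_axis, old_axis, row) for row in img])

img = np.random.default_rng(0).random((64, 128))    # toy subimage
img_square = resample_cross_range(img, dl_a, dl_s)  # now square pixels
```

Up-sampling versus down-sampling simply depends on whether `new_step` is smaller or larger than `old_step`; a band-limited interpolator could replace `np.interp` for higher fidelity.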

3.2.2. Subimage Scaling and Rotation

Use $y_{iq}$ to denote the range compression position of the qth scatterer in the ith equivalent sensor; from the expression of $r_{iq}$, $y_{iq}$ can be expressed as
$$y_{iq} = r_q\cos(\theta_{q0}-\alpha_i)\cos\beta_i = (x_{q0}\sin\alpha_i + y_{q0}\cos\alpha_i)\cos\beta_i. \tag{20}$$
After the MFT, the cross-range compression position can be expressed as
$$x_{iq} = r_q\sin(\theta_{q0}-\alpha_i)\cos\beta_i = (x_{q0}\cos\alpha_i - y_{q0}\sin\alpha_i)\cos\beta_i. \tag{21}$$
Therefore, the imaging position of a scatterer $(x_{q0}, y_{q0})$ in the ith subimage is $(x_{iq}, y_{iq})$, which depends on $\alpha_i$ and $\beta_i$ and therefore differs from one subimage to another. To combine all subimages, the influence of $\alpha_i$, $\beta_i$ must be removed so that the imaging positions of the same scatterer in different subimages are aligned. By examining (20) and (21), the relationship between $(x_{q0}, y_{q0})$ and $(x_{iq}, y_{iq})$ can be represented by the rotation and scaling of the coordinate system as shown in Figure 3, which can be expressed as
$$\begin{bmatrix} x_{iq} \\ y_{iq} \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\beta_i\cos\alpha_i & -\cos\beta_i\sin\alpha_i & 0 \\ \cos\beta_i\sin\alpha_i & \cos\beta_i\cos\alpha_i & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_{q0} \\ y_{q0} \\ 1 \end{bmatrix}. \tag{22}$$
Thus, the alignment of all subimages can be achieved by applying an inverse scaling transform and an inverse rotation transform. The image transform is carried out in homogeneous coordinates. Assume the subimage $I_i$ has $G_{pix}$ pixels, where $G_{pix} = L \times K$ (L and K are the sampling lengths in the range and cross-range dimensions). The homogeneous coordinate of pixel $(l, k)$ is represented as $i_{ilk} = [l, k, 1]^T$. By stacking the column vectors $\{i_{ilk}\}$, $l = 1, 2, \dots, L$, $k = 1, 2, \dots, K$, in the row dimension, the coordinates of $I_i$ are converted into a $3 \times G_{pix}$ matrix. Therefore, the corrected subimage matrix $I_i'$ can be expressed as $I_i' = T_r T_s I_i$ [27], where the scaling matrix $T_s$ and rotation matrix $T_r$ are
$$T_s = \begin{bmatrix} 1/\cos\beta_i & 0 & 0 \\ 0 & 1/\cos\beta_i & 0 \\ 0 & 0 & 1 \end{bmatrix}, \tag{23}$$
$$T_r = \begin{bmatrix} \cos\alpha_i & \sin\alpha_i & 0 \\ -\sin\alpha_i & \cos\alpha_i & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{24}$$
After scaling and rotation, the coordinates of some pixels of the subimage may no longer be integers; the values at such coordinates are undefined and must be estimated from their neighbors. To achieve this, bilinear image interpolation is adopted here.
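This correction step can be sketched as follows (our own NumPy illustration; the choice of the image center as the pivot of the rotation is an assumption). Each pixel of the corrected image is mapped through the forward transform of Eq. (22) to find its location in the raw subimage, which is then sampled bilinearly:

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinearly interpolate img at fractional (col, row) coordinates."""
    h, w = img.shape
    ix, iy = np.floor(xs).astype(int), np.floor(ys).astype(int)
    wx, wy = xs - ix, ys - iy
    ix0, ix1 = np.clip(ix, 0, w - 1), np.clip(ix + 1, 0, w - 1)
    iy0, iy1 = np.clip(iy, 0, h - 1), np.clip(iy + 1, 0, h - 1)
    top = (1 - wx) * img[iy0, ix0] + wx * img[iy0, ix1]
    bot = (1 - wx) * img[iy1, ix0] + wx * img[iy1, ix1]
    return (1 - wy) * top + wy * bot

def correct_subimage(img, alpha, beta):
    """Undo the rotation/scaling of Eq. (22): for every output pixel,
    apply the FORWARD transform to find where it lies in the raw
    subimage, then sample that location bilinearly."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0     # pivot: image center (assumed)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    ca, sa, cb = np.cos(alpha), np.sin(alpha), np.cos(beta)
    x0, y0 = xs - cx, ys - cy
    xs_src = cb * (ca * x0 - sa * y0) + cx    # forward map of Eq. (22)
    ys_src = cb * (sa * x0 + ca * y0) + cy
    return bilinear_sample(img, xs_src, ys_src)

# alpha = beta = 0 leaves the image unchanged
img = np.random.default_rng(1).random((32, 32))
assert np.allclose(correct_subimage(img, 0.0, 0.0), img)
```

Inverse-mapping output pixels (rather than forward-mapping input pixels) guarantees that every corrected pixel receives a value, which is why bilinear interpolation fits naturally here.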

3.2.3. Subimage Fusion

After the above processing, each scattering point is located at the same pixel in all subimages. Therefore, the final image can be obtained by accumulating all the subimages. The weight coefficients of the subimages are calculated adaptively according to their image qualities; herein, the entropy is used as the quality metric.
The entropy of subimage I i [28,29,30] is defined as
$$E_p = -\sum_{l=1}^{L}\sum_{k=1}^{K} D(l,k)\ln\!\left[D(l,k)\right], \tag{25}$$
where $D(l,k) = d(l,k)\big/\sum_{l=1}^{L}\sum_{k=1}^{K} d(l,k)$ and $d(l,k)$ is the value of pixel $(l,k)$. The image entropy reflects the sharpness of the image: an image with a smaller entropy value is clearer and is therefore given a larger weight. The final fusion image can be accumulated by
$$I_f = \sum_{i=1}^{I} I_i/E_{p_i}. \tag{26}$$
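The entropy weighting of Eqs. (25) and (26) can be sketched as follows (our own illustration; the synthetic "sharp" and "noisy" subimages are assumptions for demonstration only):

```python
import numpy as np

def image_entropy(img):
    """Image entropy of Eq. (25): normalize pixel magnitudes into a
    distribution D(l,k) and return -sum D*ln(D)."""
    d = np.abs(img).ravel()
    D = d / d.sum()
    D = D[D > 0]                     # 0*ln(0) is taken as 0
    return -np.sum(D * np.log(D))

def fuse(subimages):
    """Noncoherent fusion of Eq. (26): accumulate subimages with
    weights 1/E_p, so sharper (lower-entropy) subimages count more."""
    return sum(im / image_entropy(im) for im in subimages)

rng = np.random.default_rng(2)
sharp = np.zeros((64, 64))           # well-focused subimage: two bright pixels
sharp[32, 32], sharp[32, 33] = 1.0, 0.5
noisy = rng.random((64, 64))         # defocused / noisy subimage
fused = fuse([sharp, noisy])
```

A well-focused subimage concentrates its energy in few pixels and thus has a much smaller entropy, so it dominates the weighted sum.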
Figure 4 shows the complete processing chain. The translation motion of the target is assumed to have been compensated; after range compression (neglecting range migration), the MFT is applied to each equivalent channel's echo to obtain a subimage. After the preprocessing, scaling and rotation of all subimages, the fusion image is acquired by accumulating all the subimages with coefficients determined by the subimage entropies.

4. Simulation and Experimental Results

In this section, simulations are first conducted to demonstrate the effectiveness of the proposed method. The distributed ISAR system is composed of four transmitting sensors and four receiving sensors (i.e., $M = 4$, $N = 4$); therefore, sixteen equivalent sensors can be obtained.

4.1. Target Model and Echo Analysis

The transmitted signal is a set of orthogonal signals with the same center frequency (10 GHz) and the same bandwidth (300 MHz), which achieves a range resolution of 0.5 m. Assume the target rotates nonuniformly with an angular speed of 0.01 rad/s and an angular acceleration of 0.04 rad/s², so $e_1 = 4$. The accumulation time is 0.5 s, which achieves a cross-range resolution of 1.5 m. The target is composed of eleven scatterers (see Figure 5a) which are isotropic, independent of each other, and obey the Swerling I model. Under this model, each scatterer keeps the same RCS value within one observation channel, but its RCS is drawn independently and identically across the channels. Setting the noise power to $\varsigma_n^2$, for a given signal-to-noise ratio (SNR), the signal power $\varsigma_s^2$ can be calculated by $\varsigma_s^2 = \varsigma_n^2\cdot 10^{\mathrm{SNR}/10}$. Then, $I$ random numbers $\sigma_i$ that follow the Rayleigh distribution with $\varsigma_s$ as the scale parameter are generated, and the RCS value of the ith observation channel is set to $\sigma_i$ [9].
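This per-channel RCS draw can be sketched as follows (a hypothetical helper of our own; the function name and random seed are not from the paper):

```python
import numpy as np

def channel_rcs(snr_db, noise_power, n_channels, seed=0):
    """Swerling I across channels: one Rayleigh-distributed RCS value
    per observation channel, constant over that channel's dwell.
    The scale is sigma_s = sqrt(sigma_n^2 * 10^(SNR/10))."""
    rng = np.random.default_rng(seed)
    signal_power = noise_power * 10.0 ** (snr_db / 10.0)
    return rng.rayleigh(scale=np.sqrt(signal_power), size=n_channels)

# 16 equivalent channels as in the simulation (M = N = 4)
rcs = channel_rcs(snr_db=8.0, noise_power=4.0, n_channels=16)
```

Because each channel draws its own value, some subimages come out strong and others weak, which is exactly the fluctuation the entropy-weighted fusion is designed to absorb.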
Figure 5b shows the range-compressed echoes of all sixteen equivalent sensors. Each equivalent sensor has 101 slow-time sampling points, and there are 16 pieces of echo data. According to (20), the range compression position of each scatterer is related to $\alpha_i$ and $\beta_i$. Since $\alpha_i$ and $\beta_i$ differ from channel to channel, the cross-range signal of each scatterer spreads over different range bins, and there are range jumps between different equivalent sensors. Therefore, the observation angle in one range bin cannot be increased by directly splicing the cross-range signals from different equivalent sensors.
As the RCS fluctuation of the high-maneuvering target and the nonuniform rotation are the main problems that affect the image quality, we mainly consider overcoming them by imaging the nonuniform rotating target from multiple channels.

4.2. Comparison of FT and MFT

Assume we have already estimated the ratio of rotation parameters by parametric estimation methods [18,19,20,21,22,23,24], or we can estimate the rotation parameters w 0 , w 1 [1,25,26] to calculate e 1 . Figure 6a,b show the subimages (without noise) of the twelfth equivalent channel with mean angle α i = 0 ° and half difference angle β i = 18 ° , obtained from the FT and the MFT, respectively.
Figure 6c,d are the cross-range profiles of (a) and (b) for the range bin at range 0 m, which contains 5 scatterers. It is evident that the MFT outperforms the FT when dealing with a nonuniform rotation target. Without a priori knowledge, it is impossible to recognize the 5 scatterers in Figure 6a. Compared with Figure 6b, the scatterers in Figure 6a have wider mainlobes, and the power of their sidelobes is comparable with that of the mainlobes. Thus, the resolution of the FT in this case is limited, and it is easily subject to sidelobe interference. Conversely, the cross-range profile obtained by the MFT is easily recognized and has lower sidelobes (about −25 dB).
It should be pointed out that when the estimate of $\vartheta(t_p)$ in (13) is inaccurate, there will be errors in the MFT results. However, as long as the error of $\vartheta(t_p)$ is not too large, acceptable results can still be obtained by applying the MFT.

4.3. Subimage Fusion

After applying the MFT to the range-compressed echoes of all observation channels, the target images from different observation angles can be obtained. Figure 7 shows the subimages from all the observation angles ($\mathrm{SNR} = 8$ dB, $\varsigma_n^2 = 4$); each subimage has a different degree of scaling and rotation.
Due to the RCS fluctuation, in some subimages (like (h),(i),(j)), the scattering points can be easily identified, while this is not true for other subimages (like (b),(k),(l)). The image quality of the corrected subimages can be judged directly from each subimage, and they are not drawn here. Figure 8 is the fusion image, in which the scattering points can be more clearly recognized. Table 1 shows the entropies of all subimages and the final fusion image. We can see that the final fusion image has the smallest entropy and the best image quality.
To further analyze the performance of the image fusion, the cross-range profiles of the range bin at range 0 m are drawn. Three corrected subimages are selected: Figure 9a–c represent the cross-range profiles of the corrected subimages of (b),(n),(o) in Figure 7. The noise in Figure 9a is very strong, and the positions of the scattering points are completely indistinguishable. The scatterer positions in Figure 9b,c can be distinguished, but the noise fluctuations in (b) are still very obvious. From the fusion image in Figure 9d, it can be seen that the noise power has been reduced significantly and the positions of the scatterers are clearly distinguishable. Although the noncoherent fusion of the subimages does not improve the image resolution, the image quality after fusion is indeed improved.

4.4. Application to Live Data

The aforementioned technique has been applied to measured data of a conical target in a microwave anechoic chamber. The target is placed at the center of the turntable and remains motionless. The radar rotates around the target with the turntable and emits an 8–10 GHz stepped-frequency modulated (SFM) signal with a step of 20 MHz. The size and shape of the target are shown in Figure 10, and the initial radar LOS is indicated by the dotted line. The PRF is 400 Hz. We simulate multiple equivalent sensors observing the target simultaneously by selecting radar echoes from different observation angles. In addition, we select the slow-time sampling points nonuniformly to simulate nonuniform rotation of the target. Meanwhile, we add different levels of noise to the echoes of different observation channels to simulate the fluctuation of the background noise.
After range compression and the MFT, we obtain imaging results from five observation angles, namely [−10°, −5°, 0°, 5°, 10°]. Figure 11a–e show the target at the different rotation angles (since the half-difference angle of each observation channel is zero in this experiment, no scaling is required). They also show that at some observation angles not all scatterers can be identified, owing to mutual occlusion between scatterers. Figure 11f is the non-coherent fusion image; according to Table 2, it has the smallest entropy and makes the shape of the target clearly visible against the noise.
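The azimuth focusing provided by the MFT can be illustrated on a one-dimensional toy signal. For a scatterer whose azimuth phase is proportional to a known nonuniform rotation angle θ(t), the MFT correlates the signal with exp(−jkθ(t)) over a grid of k, compressing the energy into a single peak, whereas the ordinary FT smears it. The rotation-law parameters and the coefficient k below are arbitrary illustrative values, not those of the experiment.

```python
import numpy as np

# nonuniform rotation law: theta(t) = w0*t + 0.5*a*t^2 (illustrative values)
N = 512
t = np.linspace(0.0, 1.0, N, endpoint=False)
w0, a = 40.0, 60.0
theta = w0 * t + 0.5 * a * t**2

# echo of one scatterer: phase proportional to the rotation angle
k_true = 5.0
s = np.exp(1j * k_true * theta)

# Matching Fourier Transform: replace the linear basis exp(-j*w*t) of the
# ordinary FT with exp(-j*k*theta(t)), matched to the known rotation law
k_grid = np.linspace(0.0, 10.0, 501)
mft = np.array([np.sum(s * np.exp(-1j * k * theta)) for k in k_grid])

k_hat = k_grid[np.argmax(np.abs(mft))]   # recovered coefficient, close to k_true
ft_peak = np.max(np.abs(np.fft.fft(s)))  # FT smears the energy: much lower peak
```

The MFT peak reaches the full coherent gain N, while the FT of the same signal spreads its energy over the band swept by the time-varying rotation rate.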

5. Conclusions

The distributed ISAR technique can utilize data acquired from multiple observation angles. In this paper, a method that combines the distributed ISAR technique and the MFT is proposed to image a high-maneuvering target. Two main problems, namely the RCS fluctuation and the nonuniform rotation of a high-maneuvering target, are solved.
In this paper, we assume that all IPPs are the same, which simplifies the processing chain. It should be pointed out that, to satisfy this requirement, the radar sensors should not be placed far apart from each other in a real scenario. Based on this assumption, the multiple channel echoes of the nonuniformly rotating target are first acquired from different observation angles. Secondly, applying the MFT to all channel echoes avoids the azimuth defocusing caused by the FT and yields well-focused subimages. Moreover, once the rotation parameters have been estimated, the MFT can be applied directly to all range bins of the echoes from all observation channels, which is computationally efficient. Thirdly, after preprocessing and accumulating all subimages, the final fusion image is obtained. To reduce the influence of RCS fluctuations, the accumulation coefficients are determined adaptively according to the subimage entropies. The simulations and the experiment demonstrate that the proposed method overcomes the RCS fluctuation and produces a final image with improved quality.
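The preprocessing, rotation, and accumulation step summarized above can be sketched as follows. A dependency-free nearest-neighbor rotation is used for simplicity, and equal weights are a placeholder: the paper derives the actual accumulation coefficients from the subimage entropies.

```python
import numpy as np

def rotate_nn(img, ang_deg):
    """Rotate an image about its center by ang_deg degrees
    (nearest-neighbor resampling, kept dependency-free for illustration)."""
    ny, nx = img.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    th = np.deg2rad(ang_deg)
    y, x = np.mgrid[0:ny, 0:nx]
    ys, xs = y - cy, x - cx
    # inverse mapping: sample the source at the back-rotated coordinates
    src_y = np.round(cy - xs * np.sin(th) + ys * np.cos(th)).astype(int)
    src_x = np.round(cx + xs * np.cos(th) + ys * np.sin(th)).astype(int)
    out = np.zeros_like(img)
    ok = (src_y >= 0) & (src_y < ny) & (src_x >= 0) & (src_x < nx)
    out[ok] = img[src_y[ok], src_x[ok]]
    return out

def align_and_accumulate(subimages, view_angles_deg, weights=None):
    """Rotate each subimage magnitude by minus its observation angle so
    that all share a common orientation, then accumulate non-coherently
    (equal weights unless supplied)."""
    if weights is None:
        weights = np.full(len(subimages), 1.0 / len(subimages))
    fused = np.zeros(np.abs(subimages[0]).shape)
    for img, ang, w in zip(subimages, view_angles_deg, weights):
        fused += w * rotate_nn(np.abs(img), -ang)
    return fused
```

With all observation angles equal to zero, this reduces to a weighted average of the subimage magnitudes, matching the anechoic-chamber case where no scaling or rotation is needed.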
Thus, the innovations and contributions of this article are:
  • Based on the characteristics of the high-maneuvering target, the distributed ISAR technique is used to observe the target from multiple channels with different RCS values.
  • The MFT is applied to the echo of each channel to acquire well-focused subimages, which is computationally efficient compared with other imaging methods.
  • Subimage fusion with adaptive coefficients calculated from the subimage entropies effectively overcomes the RCS fluctuations.
In this paper, only rotation around the Z-axis is considered. However, in a real scenario, the target generally undergoes three-dimensional rotation, which is more complicated; in this case, obtaining a stable and recognizable ISAR image is challenging. Meanwhile, the IPPs are assumed to be identical in this paper. If the radar formation is not strictly constrained, this assumption does not hold, and the subimages from different observation channels cannot be accumulated directly. Further research is necessary to increase the applicability of the proposed method. Another research direction of interest is how to use the raw data of each channel to obtain an ISAR image with higher resolution when the target rotates nonuniformly.

Author Contributions

Y.L. and W.Z. conceived and designed the experiments. Y.L. wrote the paper. Y.F. provided valuable discussions, substantial help, and important comments for improving the presentation.

Acknowledgments

The authors acknowledge fruitful discussions with Guanhua Zhao.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Geometry of the distributed ISAR.
Figure 2. The ith IPP.
Figure 3. The rotation relationship of the subimages.
Figure 4. The complete processing chain.
Figure 5. (a) Target model; (b) echoes of all equivalent sensors.
Figure 6. (a) Subimage by FT; (b) subimage by MFT; (c) the profile of one range bin in (a); (d) the profile of one range bin in (b).
Figure 7. (a–p) Subimages of sixteen observation angles.
Figure 8. The fusion image.
Figure 9. The profile of one range bin in (a–c) three corrected subimages and (d) the fusion image.
Figure 10. The size and the shape of the conical target.
Figure 11. The conical target: (a–e) subimages from different observation angles; (f) the fusion image.
Table 1. The image entropies of the subimages and the final fusion image (SNR = 8 dB, ς_n^2 = 4).

Ep1      Ep2      Ep3      Ep4      Ep5      Ep6
5.6934   6.8958   5.9686   5.9141   5.8311   6.8305
Ep7      Ep8      Ep9      Ep10     Ep11     Ep12
6.2674   5.4732   5.4162   5.3319   6.9425   6.8433
Ep13     Ep14     Ep15     Ep16     Ep_final
5.8797   6.7876   5.5706   6.0895   5.4518
Table 2. The image entropies of the subimages and the fusion image of the conical target.

Ep1      Ep2      Ep3      Ep4      Ep5      Ep_final
5.4906   5.0120   4.6289   4.5569   4.4304   4.1634

Li, Y.; Fu, Y.; Zhang, W. Distributed ISAR Subimage Fusion of Nonuniform Rotating Target Based on Matching Fourier Transform. Sensors 2018, 18, 1806. https://doi.org/10.3390/s18061806