1. Introduction
Conventional monostatic ISAR images are obtained under the condition that the change of the radar observation angle is small [1]. For modern highly maneuvering aircraft with high speed, stealth and other characteristics, even a slight change of the observation angle can cause a fluctuation of 10 to 15 dB in the RCS [2]. The strong fluctuation of the RCS causes a deterioration in the ISAR image quality. Therefore, the imaging quality of monostatic ISAR for a high-maneuvering target is easily influenced by the observation angle. Meanwhile, after translation motion compensation, the nonuniform rotation caused by the maneuvering motion will degrade the imaging performance of the conventional FT-based method.
Unlike conventional monostatic ISAR, the distributed ISAR technique can utilize the data acquired from multiple observation angles to improve the image quality [3,4,5,6]. Each radar sensor is characterized by either a transmitting capability or a receiving capability. Moreover, the receiving sensors can receive and separate all the transmitted signals. Therefore, any transmitting sensor and any receiving sensor can form a transmitting/receiving channel (or can be considered to form an equivalent self-transmitting and self-receiving sensor). With an appropriate formation of the radar sensors, the target can be observed from multiple observation angles, which provides a way to overcome the RCS fluctuation.
Recently, research on distributed ISAR has grown rapidly. Pastina et al. [3,4,5,6] analyzed the potential of distributed ISAR to increase the cross-range resolution by exploiting multiple equivalent sensors to increase the global variation of the view angle. However, its effectiveness relies on the assumption that the change of the observation angle is small, so that the range-compressed echoes of different channels have stable phases. In order to obtain a target image that overcomes the RCS fluctuation, however, the observation angles must be very different and the phase stability cannot be guaranteed. Thus, the methods proposed in [3,4,5] are not easy to apply in this scenario. Furthermore, for the high-maneuvering target, which is inevitable in real scenarios, there is a nonuniform rotation after translation motion compensation. As a result, the change of the observation angle of each equivalent sensor is no longer a linear function of the slow time, which means that the combined view angle is not continuous and cannot provide better resolution. Thus, the methods proposed in [3,4,5] have limitations for radar formations involving large variation among the individual perspectives and for high-maneuvering targets.
Image fusion using subimages obtained from the distributed ISAR system is a solution to the RCS fluctuation problem of the high-maneuvering target. The distributed ISAR system enables the target to be observed from multiple perspectives, and image fusion, instead of echo fusion, can relax the limitation on the change of the observation angle. References [7,8] acquire the fusion image after estimating the rotation rate and the bistatic angle from two subimages. Without more observation channels and more RCS information, the improvement of the image quality of bistatic ISAR is limited. Reference [9] proposed an image fusion method for a uniformly rotating target via distributed ISAR, but without considering the nonuniformly rotating target.
In this work, the authors are interested in the potential of distributed ISAR to acquire the image of a high-maneuvering target. Thus, the two aforementioned key problems must be considered, namely the RCS fluctuation over different observation angles and the target's nonuniform rotation after translation compensation. For the first problem, the distributed ISAR technique is applied to obtain subimages from different observation angles with different RCS values. Then, the fusion image, which contains all the RCS information and has improved quality, can be acquired by the non-coherent fusion method.
For the second problem, it is proved that the radar echoes can be modeled as a polynomial phase signal (PPS), and the Fourier transform is inappropriate for the azimuth focusing. To solve this problem, the Range-Instantaneous-Doppler (RID) algorithm has been proposed to improve the ISAR image quality, where the Fourier transform is substituted by time-frequency representations (TFRs). The first approach of the RID algorithm is based on TFRs with high concentration and reduced cross-terms, such as the imaging methods based on the Short-Time Fourier Transform (STFT) [10], the Wigner-Ville distribution (WVD) [11] and so on. Though these RID algorithms perform well in terms of computational efficiency, they still suffer from the tradeoff between time-frequency concentration and cross-terms. Also, the obtained RID images are not very stable, which causes difficulty in the subimage fusion step. The second approach is based on the parameter estimation technique: high-quality ISAR images can be obtained by estimating the coefficients of the PPS. The parameterized imaging methods (such as [12,13]) are effective for the enhancement of the ISAR image. However, for the distributed ISAR system, it is computationally expensive to estimate the signal parameters of all range bins in the echoes from all observation channels. Fortunately, the rotational parameters of the target can be regarded as invariable over the whole observation time, and the ratio of the rotational acceleration to the rotational speed is also fixed. Therefore, estimating the ratio of the rotational parameters in one or several range bins, rather than estimating the signal parameters in all range bins, can greatly reduce the amount of computation. Wu [14] describes the rotational nonuniformity by the relative angular acceleration (RAA) and relative angular jerk (RAJ), and with the estimated RAA and RAJ, rotational nonuniformity compensation is carried out. This method is effective for obtaining a high-quality ISAR image, but for distributed ISAR, it is still cumbersome to construct a compensation matrix for each equivalent sensor image. Thus, the MFT imaging method is used here. By estimating the rotation parameters, the ISAR images of all observation channels can be obtained directly through the MFT.
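The azimuth defocusing caused by a PPS can be reproduced numerically. The following sketch is illustrative only (the Doppler centroid, chirp rate, and sampling values are assumed, not taken from the paper): a plain FFT smears the energy of a quadratic-phase signal, while compensating the quadratic term refocuses it into a single peak.

```python
import numpy as np

# Illustrative sketch: after translation compensation, a scatterer on a
# nonuniformly rotating target returns a polynomial phase signal (PPS) in
# slow time.  A plain FFT smears its energy across Doppler, while removing
# the quadratic term (assumed known here) refocuses it into a single peak.
# All numeric values below are made-up examples.
fs, T = 256.0, 1.0                                  # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1.0 / fs)                       # slow-time axis
f0, alpha = 40.0, 30.0                              # Doppler centroid, chirp rate
s = np.exp(2j * np.pi * (f0 * t + 0.5 * alpha * t**2))   # second-order PPS

spec_ft = np.abs(np.fft.fft(s))                     # direct FT: defocused
s_comp = s * np.exp(-1j * np.pi * alpha * t**2)     # remove the quadratic phase
spec_mft = np.abs(np.fft.fft(s_comp))               # now a pure tone: focused

# Peak-to-total ratio as a crude focus measure (higher is better focused).
focus_ft = spec_ft.max() / spec_ft.sum()
focus_mft = spec_mft.max() / spec_mft.sum()
print(f"FT focus: {focus_ft:.3f}, compensated focus: {focus_mft:.3f}")
```

The compensated spectrum concentrates essentially all of the energy in one Doppler bin, which is the effect that the parameterized methods discussed above exploit.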
This paper is organized as follows. After presenting the signal model of distributed ISAR in Section 2, the imaging method for the nonuniformly rotating target based on the MFT is introduced in Section 3. Then, both simulation and experimental results are presented to validate the effectiveness of the proposed method in Section 4. Finally, we conclude this paper in Section 5.
2. Distributed ISAR Echo Model
Consider a 3D coordinate system with the origin at the target's fulcrum; the target's motion can be decomposed into a translation of the fulcrum and a rotation of the target body. We assume here that any relative translational motion between the distributed sensors and the target fulcrum has already been compensated. Therefore, we can focus on the target rotation. To simplify the processing algorithm, only the dominant rotation around the vertical axis is considered. Also, we model the target as a rigid body consisting of Q scatterers. The radar formation and the target rotation are shown in Figure 1.
The distributed ISAR system consists of M transmitting sensors and N receiving sensors, as shown in Figure 1. m is used to represent the mth transmitting sensor and n is used to represent the nth receiving sensor. Each of them can be placed on the ground and carry an antenna appropriately steered toward the moving target in the air. Another possible application is that each radar sensor is carried by an aircraft to observe a target on the ground. The detailed placement of the sensors will be introduced later.
Assume the M transmitting signals are orthogonal, and each receiving sensor has the ability to receive and separate the signals from the different transmitting sensors. Thus, MN transmitting/receiving channels can be formed. Meanwhile, we assume that all sensors have achieved time synchronization to ensure accurate matching of the transmitted and received signals. After demodulation and range compression, the received backscattered signal in the m-nth observation channel is denoted as
where is the fast time, is the slow time, c is the wave velocity, is the carrier wavelength, is the scattering coefficient of the qth scatterer in the m-nth observation channel, and is the propagation distance of the signal in the m-nth channel for the qth scatterer. denotes the point spread function of .
where is the sinc function [15].
The position vector of the qth scatterer can be written as
where is the distance of the scatterer from the fulcrum O, is the initial azimuth angle, is the elevation angle of the qth scatterer above the XOY plane, and is the rotation angle at slow time , measured clockwise. So, is the real azimuth angle at .
The position unit vectors of the mth transmitting sensor and the nth receiving sensor are denoted as and :
where and are the distances between the fulcrum and the transmitting or receiving sensor, ( ) is the angle between the positive direction of the Y-axis and the projection onto the XOY plane of the line connecting the receiving sensor (the transmitting sensor) and the target fulcrum, as drawn in Figure 1. and are the grazing angles of the transmitting and receiving sensors. For the sake of simplicity, the grazing angles of all sensors are assumed to be the same, , which is reasonable when the sensors are not very far from each other and the target is located far away.
Therefore, under the far-field assumption, the propagation distance can be expressed approximately as
where , , and are the mean distance, mean angle, and half difference angle of the m-nth transmitting/receiving channel pair, respectively [4].
The transmitting/receiving pair can be regarded as the ith equivalent sensor. By setting and , and neglecting the constant distance under the assumption that the translation motion has been compensated, can be written as
Therefore, the received signal of the ith equivalent sensor can be expressed as
Due to the spatial separation of the equivalent sensors, the imaging projection planes (IPPs) may not be the same. Since each equivalent sensor can be regarded as working independently, and the analysis of each is similar, we use the ith equivalent sensor for illustration. As shown in Figure 2, since the target rotates around the Z-axis, the rotation vector is simply . represents the range unit vector of the ith IPP and points from the fulcrum O to the ith equivalent sensor. In ISAR imaging, the azimuth direction of the IPP is the cross product of the range unit vector and the effective rotation vector , so the red line in Figure 2 represents the direction of azimuth. The ith IPP is the plane containing and .
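The IPP geometry just described can be verified with a small numeric example. The sensor position below is an assumed value used only for illustration; the check confirms that, with rotation purely about the Z-axis, the azimuth direction (cross product of the range unit vector and the rotation vector) lies in the XOY plane and is orthogonal to the range axis.

```python
import numpy as np

# Numeric check of the IPP geometry: rotation purely about the Z-axis.
# The equivalent sensor position is an assumed example, not from the paper.
omega = np.array([0.0, 0.0, 1.0])             # rotation vector along Z
sensor = np.array([3.0, 4.0, 1.0])            # assumed equivalent sensor position
r_unit = sensor / np.linalg.norm(sensor)      # range unit vector (fulcrum O -> sensor)

azimuth = np.cross(r_unit, omega)             # azimuth direction of the IPP
azimuth /= np.linalg.norm(azimuth)
print(azimuth)                                # z-component is 0: lies in the XOY plane

assert abs(azimuth[2]) < 1e-12                # in the XOY plane
assert abs(np.dot(azimuth, r_unit)) < 1e-12   # orthogonal to the range axis
```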
When the IPPs are not the same, neither are the subimages from different observation angles. In order to place all the subimages in the same plane, they need to be projected onto a unified IPP [6]. Therefore, we assume that ; according to the rotation vector , the IPP is the XOY plane in Figure 1. However, we must declare that in a real scene, the radar sensors should be placed reasonably so that the IPPs can be approximated as the same plane. If the radar formation does not meet this requirement, the method in this paper needs some modification.
Based on the above assumption, the image of the target is its projection onto that plane. In this paper, the IPP is the XOY plane in Figure 1. Although any values of and can be set in this case, in reality, we still have to place the radars reasonably and limit the values of , and . At the same time, assuming that the target has been projected onto the XOY plane, we let , so can be re-expressed as
Equations (7) and (8) are the new echo expressions, which will be analyzed to obtain the target's image.
For the distributed ISAR system, the observation time is short and is small. By using the approximations and in the range dimension, which are reasonable as the range error caused by these approximations is negligible compared with the range resolution, the compressed range of the qth scatterer can be expressed as
In the cross-range dimension, we use the more accurate approximations , . Based on these approximations, can be rewritten as
In Equation (10), the first exponential term is a constant related to the scattering point and the transmitting/receiving channel, which has no effect on the imaging result. The cross-range imaging information is contained in the second exponential phase. For a nonuniformly rotating target, is a polynomial function and is a polynomial phase signal (PPS). If the cross-range compression is achieved by using the FT, the second-order terms (corresponding to the rotational acceleration) or even higher-order terms (corresponding to the high-order rotational motion) of will blur the image in the cross-range dimension. To avoid this effect, instead of the FT-based imaging method, the MFT is used here for cross-range compression. The MFT is a generalization of the FT and can effectively deal with the PPS.
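The key property exploited by the MFT can be sketched numerically. The form assumed below, a Fourier transform evaluated over the matched time axis t + 0.5·r·t², with r the ratio of rotational acceleration to rotational speed, follows the second-order rotation model above; the ratio r and the scatterer Doppler frequencies are made-up example values, not the paper's data. Because every scatterer shares the same r, a single estimate of r focuses all of them at once.

```python
import numpy as np

# Sketch of a second-order matched Fourier transform (MFT).  Every scatterer's
# slow-time phase follows the same rotation law theta(t) ~ w*t + 0.5*a*t^2,
# so each echo is exp(j*2*pi*f_q*(t + 0.5*r*t^2)) with one shared ratio
# r = a/w.  Estimating r once focuses all scatterers simultaneously.
# The values of r and the Doppler frequencies below are assumed examples.
def mft(sig, t, freqs, r):
    """Evaluate X(f) = sum_t s(t) * exp(-j*2*pi*f*(t + 0.5*r*t^2))."""
    basis = t + 0.5 * r * t**2                   # matched (warped) time axis
    return np.exp(-2j * np.pi * np.outer(freqs, basis)) @ sig

fs, T = 256.0, 1.0
t = np.arange(0, T, 1.0 / fs)
r = 0.4                                          # assumed acceleration/speed ratio
doppler = [-30.0, 10.0, 55.0]                    # three assumed scatterers
sig = sum(np.exp(2j * np.pi * f * (t + 0.5 * r * t**2)) for f in doppler)

freqs = np.arange(-fs / 2, fs / 2)               # 1 Hz frequency grid
spectrum = np.abs(mft(sig, t, freqs, r))
peaks = freqs[np.argsort(spectrum)[-3:]]         # three strongest bins
print(sorted(peaks))                             # strongest bins sit at the Doppler set
```

With the correct ratio r, each scatterer collapses to a sharp peak at its own Doppler frequency, whereas a plain FFT (r = 0) would smear them; this is why estimating the rotation parameters once suffices for all range bins and all channels.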
5. Conclusions
The distributed ISAR technique can utilize data acquired from multiple observation angles. In this paper, a method which combines the distributed ISAR technique and the MFT is proposed to obtain the image of a high-maneuvering target. Two main problems, i.e., the RCS fluctuation and the nonuniform rotation of the high-maneuvering target, are solved.
In this paper, we assume that all IPPs are the same, which simplifies the processing chain. It should be pointed out that, in order to satisfy this requirement, the radar sensors should not be placed far apart from each other in a real scene. Based on this assumption, the multiple-channel echoes of the nonuniformly rotating target are first acquired from different observation angles. Secondly, applying the MFT to all channel echoes avoids the azimuth defocusing caused by the FT and yields well-focused subimages. At the same time, by estimating the rotation parameters once, the MFT can be applied directly to all the range bins of the echoes from all observation channels, which is computationally efficient. Thirdly, after the processing and accumulation of all subimages, the final fusion image is acquired. To reduce the influence of RCS fluctuations, the accumulation coefficients are determined adaptively according to the subimage entropies. The simulations and experiment show that the proposed method overcomes the RCS fluctuation and results in a final image with improved quality.
Thus, the innovations and contributions of this article are:
Based on the characteristics of the high-maneuvering target, the distributed ISAR technique is used to observe the target through multiple channels with different RCS values.
The MFT is applied to the echo of each channel to acquire well-focused subimages, which is computationally efficient compared with other imaging methods.
Subimage fusion with adaptive coefficients calculated according to the subimage entropies can effectively overcome the RCS fluctuations.
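The entropy-based fusion step can be sketched as follows. The exact weighting rule of the paper is not reproduced here; a simple inverse-entropy weighting is assumed, so that a lower-entropy (better-focused, stronger-return) subimage earns a larger accumulation coefficient. The test images are synthetic placeholders.

```python
import numpy as np

# Sketch of entropy-weighted non-coherent subimage fusion.  The weighting
# rule (inverse entropy, normalized to sum to 1) is an assumption for
# illustration; the subimages are synthetic examples.
def image_entropy(img):
    """Shannon entropy of the normalized image intensity."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]                                 # avoid log(0)
    return float(-(p * np.log(p)).sum())

def fuse(subimages):
    ents = np.array([image_entropy(im) for im in subimages])
    w = (1.0 / ents) / np.sum(1.0 / ents)        # inverse-entropy weights
    fused = sum(wi * np.abs(im) for wi, im in zip(w, subimages))
    return fused, w

rng = np.random.default_rng(0)
focused = 0.01 * rng.random((64, 64))            # faint background...
focused[32, 32] = 1.0                            # ...plus one sharp scatterer
blurred = rng.random((64, 64))                   # defocused, noise-like subimage
fused, weights = fuse([focused, blurred])
print(weights)                                   # focused subimage weighted higher
```

A sharp subimage concentrates its intensity in few pixels and therefore has low entropy, so the weighting automatically suppresses channels where the RCS fluctuation left a weak, noisy subimage.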
In this paper, only the rotation around the Z-axis is considered. However, in a real scene, the target always has three-dimensional rotation, which is more complicated. In this case, obtaining a stable and recognizable ISAR image is challenging. Meanwhile, the IPPs are assumed to be the same in this paper. If the radar formation is not strictly limited, this assumption does not hold, and the subimages from different observation channels cannot be accumulated directly. Further research is necessary to increase the applicability of the method in this paper. Another area of interest is how to use the raw data of each channel to obtain an ISAR image with higher resolution when the target rotates nonuniformly.