Article

Signal Processing and Target Fusion Detection via Dual Platform Radar Cooperative Illumination

1 Air Force Early Warning Academy, Wuhan 430019, China
2 The 93552 Troop of Chinese People’s Liberation Army, Shijiazhuang 050081, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(24), 5341; https://doi.org/10.3390/s19245341
Submission received: 8 November 2019 / Revised: 28 November 2019 / Accepted: 3 December 2019 / Published: 4 December 2019
(This article belongs to the Special Issue Sensor Network Signal Processing)

Abstract: A modified signal processing and target fusion detection method based on the dual platform cooperative detection model is proposed in this paper. In this model, a single transmitter and dual receiver radar system is adopted, which can form a single radar and bistatic radar system, respectively. Clutter suppression is achieved by an adaptive moving target indicator (AMTI). By combining the AMTI technology and the traditional radar signal processing technology (i.e., pulse compression and coherent accumulation processing), the SNR is improved, and false targets generated by direct wave are suppressed. The decision matrix is obtained by cell averaging constant false alarm (CA-CFAR) and order statistics constant false alarm (OS-CFAR) processing. Then, the echo signals processed in the two receivers are fused by the AND-like fusion rule and OR-like fusion rule, and the detection probability after fusion detection in different cases is analyzed. Finally, the performance of the proposed method is quantitatively analyzed. Experimental results based on simulated data demonstrate that: (1) The bistatic radar system with a split transceiver has a larger detection distance than the single radar system, but the influence of clutter is greater; (2) the direct wave can be eliminated effectively, and no false target can be formed after suppression; (3) the detection probability of the bistatic radar system with split transceivers is higher than that of the single radar system; and (4) the detection probability of signal fusion detection based on two receivers is higher than that of the bistatic radar system and single radar system.

1. Introduction

With the increasing complexity of the battlefield electromagnetic environment, the single airborne early warning (AEW) radar faces more and more challenges [1,2,3]. On the one hand, limited by its detection range, a single AEW is easily detected by the enemy; on the other hand, the maneuverability of the AEW is limited, and its battlefield survivability is weak. One idea to solve this problem is cooperative detection with a multi-platform radar system. Specifically, the cooperative detection mode of the AEW and unmanned aerial vehicle (UAV) has attracted extensive research [4,5,6]. In general, the UAV flies in front and receives signals passively, while the AEW is located in the safe rear and transmits and receives signals actively. In this mode, the detection range can be expanded, and the problems of a large radar blind area and reduced detection efficiency caused by a large time-bandwidth signal can be alleviated to a certain extent [7,8,9,10]. Moreover, the separation of transmitting and receiving helps improve the detection of stealth targets and the anti-jamming ability [11,12,13,14]. In summary, the cooperative detection mode of the AEW and UAV is of great research significance.
Clutter and direct-wave interference are the main challenges for multi-platform cooperative detection. Recently, many researchers have carried out relevant work on airborne cooperative detection, and many cooperative detection methods have been proposed [15,16,17,18,19,20]. In [15], Wu studied clutter modeling of an airborne bistatic radar based on a geometric model. Wei [16] studied a range-dependence compensation method for airborne bistatic radar clutter, in which the projection of the receiver on the horizontal plane was taken as the origin and the projection of the baseline on the horizontal plane as the Y or X axis. Based on the models of [15,16], Yang [17] analyzed the non-stationary, range-variant characteristics of the clutter spectrum and the severe aliasing of the clutter caused by range and Doppler ambiguity, and proposed a knowledge-assisted space-time adaptive processing method that greatly improves the clutter suppression performance of high-speed airborne radar. For direct-wave suppression, Chen [18] proposed a method based on subspace projection, in which an orthogonal projection matrix of the direct-wave subspace was constructed using the equivalent signal of the receiving end, and the received signal was projected to suppress the direct-wave interference. The authors of [19] investigated a bistatic sonar for underwater target detection and placed a deep null in the direct-wave direction through a beam-nulling algorithm to suppress direct-wave signals. In [20], for a two-channel passive radar system, the least-squares method in vector space was used to suppress the direct-wave signals in the target channel, although the direct wave could not be completely suppressed and a residual remained.
The authors of [21,22,23] mainly described time-reversal imaging algorithms, which were used to analyze the active detection process. The authors of [24] focused on the classic problem of testing samples drawn from independent Bernoulli probability mass functions when the success probability under the alternative hypothesis is unknown. Most of the above literature studied clutter suppression, direct-wave suppression, or fusion detection separately, mainly considering the problem of improving the SNR; moreover, the research backgrounds were inconsistent and subject to many constraints. There have been few joint studies of direct-wave suppression under clutter conditions, and few studies on the influence of direct-wave suppression on target fusion detection or localization.
In this paper, we study the signal processing and target fusion detection of an airborne cooperative detection system under clutter conditions. First, we construct a geometric model of the dual-platform cooperative detection system for the application scene, in which AMTI is used to suppress clutter. Then, AMTI, pulse compression, and coherent accumulation are used to suppress the direct wave and thereby the false target it generates during target detection. Finally, the echoes received by the two receivers are used for fusion detection to improve the target detection performance. The dual-platform cooperative detection system constructed in this paper has the advantages of both a bistatic radar system and a single radar system [25,26,27]. The detection range is extended and the detection performance is improved, which lays a foundation for future research on airborne radar cooperative detection systems.
The rest of this paper is organized as follows. In Section 2, the system configuration model of cooperative illumination is established, and the echo model of the system is derived by taking linear frequency modulation (LFM) signal as an example. The signal processing and fusion detection flow of the cooperative detection system is designed, and the theoretical derivation and numerical simulation are carried out in Section 3. Section 4 verifies the feasibility and effectiveness of the signal processing and fusion detection algorithm by the simulation experiment of the designed cooperative detection system. Conclusions are presented in Section 5.
The notations used in this paper are shown in Table 1.

2. Modeling

The scene relationship between the airborne cooperative detection system and the target is shown in Figure 1, in which the AEW and the UAV constitute a dual-platform cooperative detection system; the AEW is used as a transmitter and receiver (T/R), and the UAV is used as a receiver (R). The working process of the cooperative system can be expressed as follows: first, the AEW transmits a signal to irradiate target T_g; then, target T_g reflects the signal; finally, the AEW and the UAV each receive the target echo. The AEW (T/R) forms a single radar system, and the AEW (T) and the UAV form a bistatic radar system. In fact, the side-lobe signal of the AEW will also be received directly by the UAV (R), which means that the echo signal received by the UAV includes both the target reflection signal and the side-lobe direct-wave signal. In Figure 1, the target echo power received by the T/R is denoted as P_0, the power of the target echo signal reaching the R is denoted as P_r, the power of the AEW side-lobe signal reaching the R directly is denoted as P_z, r_1 is the distance from the T/R to T_g, r_2 is the distance from T_g to R, and r_3 is the distance from the T/R to R.
Further, a Cartesian coordinate system is established by taking the position of the transmitter as the origin and the transmitter-to-receiver line as the X-axis; the Y-axis and Z-axis are perpendicular to the X-axis and satisfy the right-hand rule. The geometric relationship between the transmitter, receiver, and target is shown in Figure 2. The coordinates of the transmitter are r_t (x_t, y_t, z_t), those of the receiver are r_r (x_r, y_r, z_r), and those of the target are r_tg (x, y, z). The azimuth angle of the target relative to the transmitter is denoted as φ_1 and the corresponding pitch angle as ε_1; the azimuth angle of the target relative to the receiver is denoted as φ_2 and the corresponding pitch angle as ε_2.
According to the cooperative detection system geometric model, the expression of the signal model is derived.
The method in this paper is suitable for LFM waveforms and coded signals. In this paper, the LFM signal is selected for echo-signal derivation and simulation.
Assuming that the radar transmits LFM signal, the signal can be expressed as:
s(t) = rect(t_f/T_p) · exp(j2π(f_c t + (1/2)K t_f²)),
where rect(·) is the rectangle function, f_c is the carrier frequency, T_p is the pulse width, K = B/T_p is the frequency-modulation slope, B is the signal bandwidth, and t_f is the fast time. Then, the corresponding echo signal can be expressed as:
s_r(t_f, τ) = rect((t_f − τ)/T_p) · exp(−j2π f_c τ) · exp(jπK(t_f − τ)²),
where τ is the delay.
For a single radar system with a co-located transmitter and receiver, the delay is expressed as:
τ_a = 2(r_1 + v_1 t_n)/c,
where t_n is the slow time and v_1 is the radial velocity between the transmitter and the target, which is given by:
v_1 = (v_tg − v_t) · (r_tg − r_t)/|r_tg − r_t|,
where v_t is the transmitter velocity vector and v_tg is the target velocity vector.
The target echo signal of a single radar system is:
s_ra(t_f, t_n) = rect((t_f − τ_a)/T_p) · exp(−j2π f_c τ_a) · exp(jπK(t_f − τ_a)²) = rect[(1/T_p)(t_f − 2(r_1 + v_1 t_n)/c)] · exp[−j2π f_c · 2(r_1 + v_1 t_n)/c] · exp[jπK(t_f − 2(r_1 + v_1 t_n)/c)²].
Then, the final echo of a single radar receiver can be expressed as:
S_r0 = s_ra(t_f, t_n) + σ_n s_n,
where σ_n is the noise amplitude, σ_n = √(1/10^(SNR/10)), SNR is the signal-to-noise ratio, and s_n is the white noise.
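As a concrete sketch, the single-radar echo S_r0 above can be simulated directly. All numeric values below (carrier, bandwidth, range, velocity, sampling rate) are illustrative placeholders rather than the paper's simulation parameters, and the noise scaling uses one common amplitude convention for the stated SNR:

```python
import numpy as np

# Illustrative waveform/geometry values (NOT the paper's Table 3 parameters).
c, fc = 3e8, 1e9                 # propagation speed (m/s), carrier (Hz)
B, Tp = 1e6, 100e-6              # bandwidth (Hz), pulse width (s)
K = B / Tp                       # frequency-modulation slope
fs = 2e6                         # fast-time sampling rate (Hz)
r1, v1 = 15e3, 100.0             # transmitter-target range (m), radial velocity (m/s)
snr_db = 0.0

tf = np.arange(0, 2 * Tp, 1 / fs)     # fast-time axis, two pulse widths long

def lfm_echo(tf, tau):
    """LFM echo with delay tau: rect((tf - tau)/Tp) times the chirp phase."""
    rect = ((tf - tau) >= 0) & ((tf - tau) <= Tp)
    return rect * np.exp(-1j * 2 * np.pi * fc * tau) \
                * np.exp(1j * np.pi * K * (tf - tau) ** 2)

tn = 0.0                              # one slow-time instant
tau_a = 2 * (r1 + v1 * tn) / c        # two-way monostatic delay
# One common convention: unit-power echo, noise amplitude set from the SNR.
sigma_n = np.sqrt(10 ** (-snr_db / 10))
rng = np.random.default_rng(0)
noise = sigma_n * (rng.standard_normal(tf.size)
                   + 1j * rng.standard_normal(tf.size)) / np.sqrt(2)
Sr0 = lfm_echo(tf, tau_a) + noise     # S_r0 = s_ra + sigma_n * s_n
```

The echo occupies the T_p-long window starting at the two-way delay τ_a; pulse compression and coherent accumulation would then be applied to this vector.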
For bistatic radar systems, the time delay is:
τ_b = (r_1 + v_1 t_n + r_2 + v_2 t_n)/c,
where v_2 is the radial velocity between the target and the receiver, expressed as:
v_2 = (v_tg − v_r) · (r_tg − r_r)/|r_tg − r_r|,
where v_r is the receiver velocity vector.
Then, the target echo signal of a bistatic radar system is:
s_rb(t_f, t_n) = rect((t_f − τ_b)/T_p) · exp(−j2π f_c τ_b) · exp(jπK(t_f − τ_b)²) = rect[(1/T_p)(t_f − (r_1 + v_1 t_n + r_2 + v_2 t_n)/c)] · exp[−j2π f_c (r_1 + v_1 t_n + r_2 + v_2 t_n)/c] · exp[jπK(t_f − (r_1 + v_1 t_n + r_2 + v_2 t_n)/c)²].
The time delay of the direct wave signal is:
τ_d = (r_3 + v_3 t_n)/c,
where v_3 is the radial velocity between the transmitter and the receiver, expressed as:
v_3 = (v_r − v_t) · (r_r − r_t)/|r_r − r_t|.
Then, the direct wave signal received by the receiver can be expressed as:
s_rd(t_f, t_n) = rect((t_f − τ_d)/T_p) · exp(−j2π f_c τ_d) · exp(jπK(t_f − τ_d)²) = rect[(1/T_p)(t_f − (r_3 + v_3 t_n)/c)] · exp[−j2π f_c (r_3 + v_3 t_n)/c] · exp[jπK(t_f − (r_3 + v_3 t_n)/c)²].
The final echo of the receiver can be expressed as:
S_r1 = s_rb(t_f, t_n) + σ_z s_rd(t_f, t_n) + σ_n s_n,
where σ_z is the amplitude of the direct wave, σ_z = √(1/10^(SJR/10)), and SJR is the signal-to-jamming ratio.
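The bistatic receiver echo S_r1 can be sketched in the same way; the point of the sketch is that, with the target off the baseline, the strong direct-wave term arrives earlier (τ_d < τ_b) and, if unsuppressed, behaves like a false target. All numeric values are again illustrative placeholders, with the same amplitude convention for SNR/SJR as before:

```python
import numpy as np

# Illustrative values only (not the paper's simulation parameters).
c, fc = 3e8, 1e9
B, Tp = 1e6, 100e-6
K, fs = B / Tp, 2e6
r1, r2, r3 = 15e3, 12e3, 20e3        # T->Tg, Tg->R, baseline T->R (m)
v1, v2, v3 = 100.0, 80.0, 0.0        # radial velocities (m/s)
snr_db, sjr_db = 0.0, -20.0          # SJR < 0 dB: direct wave stronger than echo

tf = np.arange(0, 2 * Tp, 1 / fs)
tn = 0.0

def lfm_echo(tau):
    rect = ((tf - tau) >= 0) & ((tf - tau) <= Tp)
    return rect * np.exp(-1j * 2 * np.pi * fc * tau) \
                * np.exp(1j * np.pi * K * (tf - tau) ** 2)

tau_b = (r1 + v1 * tn + r2 + v2 * tn) / c   # bistatic target delay
tau_d = (r3 + v3 * tn) / c                  # direct-wave delay (shorter path)

# Amplitude scalings for the stated SNR/SJR (one common convention):
sigma_z = np.sqrt(10 ** (-sjr_db / 10))     # = 10 for SJR = -20 dB
sigma_n = np.sqrt(10 ** (-snr_db / 10))
rng = np.random.default_rng(1)
noise = sigma_n * (rng.standard_normal(tf.size)
                   + 1j * rng.standard_normal(tf.size)) / np.sqrt(2)

# S_r1 = target echo + direct wave + noise; unsuppressed, the strong
# direct-wave term shows up as a false target at delay tau_d.
Sr1 = lfm_echo(tau_b) + sigma_z * lfm_echo(tau_d) + noise
```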

3. Target Cooperative Detection for the Cooperative Detection System

According to the cooperative detection system model above, in this section, the echo simulation of the single radar system and the bistatic radar system was carried out, and the signals were processed. Constant false alarm (CFAR) detection was then performed to obtain the detection results. Finally, the target was located according to the detection results. Meanwhile, in order to improve the detection performance, the echoes of the two receivers were fused to obtain the fused detection results. The block diagram of the processing flow is shown in Figure 3.
Without loss of generality, the following assumptions were made before introducing the cooperative detection method:
(1)
The receiving platform and the transmitting platform have their own navigation system, which can get their own position information in real time and communicate with each other.
(2)
The target is not on the baseline between the transmitting and receiving platforms, which means there is a time delay between the direct wave and the echo arriving at the receiver.
(3)
The radar is a three-dimensional radar, and the main-lobe direction of its antenna is known.
(4)
The arrival time of the echo can be measured.

3.1. Detection Range Comparison

According to the preset scene, the power expression of the target echo signal arriving at the radar receiver of the AEW is deduced as:
P_0 = P_t G_t² σ λ² / ((4π)³ r_1⁴ L_1 L_e), or, in decibels, P_0 = P_t + 2G_t + σ + 2λ − 33 − 40 lg r_1 − L_1 − L_e,
where P_t is the radar transmitting peak power, G_t is the gain of the radar transmitting antenna in the target direction, σ is the radar cross-section of the target in the direction of the AEW, λ is the working wavelength of the radar, L_1 is the radar receiver loss of the AEW, and L_e is the other losses (transmission loss, atmospheric loss, pulse compression loss, etc.).
The AEW radar transmits signals. Then, the signals reach the receiver radar after the target reflection. The power received by the receiver radar is expressed as:
P_r = P_t G_t G_r σ′ λ² / ((4π)³ r_1² r_2² L_2 L_e), or, in decibels, P_r = P_t + G_t + G_r + σ′ + 2λ − 33 − 20 lg r_1 − 20 lg r_2 − L_2 − L_e,
where G_r is the main-lobe gain of the passive receiver antenna, σ′ is the radar cross-section of the target in the direction of the receiver (the bistatic radar cross-section), L_2 is the passive receiver loss, and L_e is the other losses.
The power expression of the radar side lobe signal received by the receiver radar from the AEW is as follows:
P_z = P_t G_t′ A_r / (4π r_3² L_2 L_e′) = P_t G_t′ G_r′ λ² / ((4π r_3)² L_2 L_e′), or, in decibels, P_z = P_t + G_t′ + G_r′ + 2λ − 22 − 20 lg r_3 − L_2 − L_e′,
where A_r is the effective receiving area of the receiver, G_t′ is the side-lobe gain of the AEW radar transmitting antenna, G_r′ is the side-lobe gain of the receiver antenna, and L_e′ is the other losses.
According to the above expression, the ratio of the target echo power of the receiver to the target echo power of the radar receiver of the AEW can be expressed as:
K_1 = P_r/P_0 = [P_t G_t G_r σ′ λ² / ((4π)³ r_1² r_2² L_2 L_e)] / [P_t G_t² σ λ² / ((4π)³ r_1⁴ L_1 L_e)] = (G_r r_1² σ′ L_1 L_e) / (G_t r_2² σ L_2 L_e).
In order to analyze the main factors affecting the ratio K_1, assuming L_1 L_e ≈ L_2 L_e ≈ 1, we have:
K_1 = P_r/P_0 ≈ (G_r r_1² σ′) / (G_t r_2² σ).
If the main-lobe gain of the receiver antenna, G_r, equals that of the AEW radar antenna, G_t, the bistatic radar cross-section is larger than the monostatic one, and the distance between the receiver and the target is smaller than that between the AEW and the target, then K_1 ≥ 1. According to the formula, when K_1 ≥ 1 (K_1 ≥ 0 dB), the echo power received by the receiver is greater than or equal to that received by the AEW. If the processing gain of approximately matched filtering can be obtained in the detection process, the detection ability of the bistatic system equals the active detection ability of the AEW radar, and the target can be detected.
In this paper, FEKO was used to simulate the cross-sectional area of the single radar and bistatic radar. The target model is shown in Figure 4, and the simulation results of the cross-sectional area of the single radar and bistatic radar are shown in Figure 5.
According to the above formula derivation and radar sectional area, numerical simulation analysis was conducted, and the results are shown in Figure 6.
Figure 6 is the value distribution diagram of the ratio K_1; Figure 6a is the three-dimensional diagram of the value distribution of K_1, and Figure 6b is the isogram of the value distribution of K_1. It can be seen from the figures that, within a range of 500 km, except for some positions near the launch platform where the receiving power of the bistatic radar is less than that of the AEW radar, the bistatic receiving power is larger at all other positions. That is to say, the distance at which the bistatic system can detect targets is farther than that of the AEW, namely the detection performance of the bistatic radar is better than that of the AEW. At the same time, it can be seen from the figure that the locations where K_1 < 0 dB are close to the launching platform. For the cooperative detection of the AEW and UAV, the AEW is located safely in the rear while the receiver goes forward and receives signals silently, so the detection performance for targets in the detection area is only slightly affected. The detection range of the two-platform cooperative detection system is extended, and the detection ranges of the two systems complement each other, so the system has more advantages in target detection.
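A quick numeric sketch of the simplified ratio K_1, with illustrative gain and RCS values rather than the paper's FEKO-derived ones:

```python
import numpy as np

# Illustrative values: equal main-lobe gains, bistatic RCS sigma_b larger
# than the monostatic sigma_m (NOT the paper's FEKO-derived figures).
G_t, G_r = 1.0, 1.0
sigma_m, sigma_b = 1.0, 2.0      # monostatic / bistatic RCS (m^2)

def K1_db(r1, r2):
    """K1 = Pr/P0 ~ (G_r * r1^2 * sigma_b) / (G_t * r2^2 * sigma_m), in dB."""
    return 10 * np.log10((G_r * r1**2 * sigma_b) / (G_t * r2**2 * sigma_m))

# Receiver forward of the AEW (r2 < r1): K1 > 0 dB, bistatic echo stronger.
print(K1_db(r1=200e3, r2=80e3))
# Target close to the launch platform (r1 < r2): the ratio can go negative.
print(K1_db(r1=40e3, r2=70e3))
```

This reproduces the qualitative behavior described above: K_1 dips below 0 dB only when the target sits much closer to the transmitter than to the receiver.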

3.2. Direct Wave Suppression

The ratio of receiver target echo to direct wave power can be expressed as:
K_2 = P_r/P_z = [P_t G_t G_r σ′ λ² / ((4π)³ r_1² r_2² L_2 L_e)] / [P_t G_t′ G_r′ λ² / ((4π r_3)² L_2 L_e′)] = (G_t G_r r_3² σ′ L_e′) / (4π G_t′ G_r′ r_1² r_2² L_e).
The power ratio of target echo to direct wave can be estimated under different conditions.
In order to analyze the main factors affecting the ratio K_2, assuming L_e ≈ L_e′ ≈ 1, we have:
K_2 = P_r/P_z ≈ (1/(4π)) · (G_r G_t r_3² σ′) / (G_r′ G_t′ r_1² r_2²) = M r_3² σ′ / (4π r_1² r_2²),
where M = (G_r G_t)/(G_r′ G_t′).
It was assumed that the other losses of the AEW and the receiver are similar, and that the processing gain of approximately matched filtering can be obtained for the received target echo signal in the detection process. M = 44 dB was selected, and K_2 was simulated at different target positions according to the bistatic radar cross-section. The results are shown in Figure 7, where Figure 7a is the three-dimensional diagram of the value distribution of the ratio and Figure 7b is the isogram of the value distribution of the ratio. As can be seen from the figure, in the simulation area, the ratio is highest at the location of the transceiver platform, where it is −11 dB, while at other locations it is basically below −30 dB, namely K_2 ≪ 1. The power of the direct wave is thus far greater than the target echo power at the bistatic receiver. If the direct wave is not processed, the direct-wave signal will enter the target echo channel through the antenna side lobe of the receiver and form false-target interference; at the same time, the detection threshold will be raised and the target signal cannot be detected.
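The simplified ratio K_2 can be sketched the same way, reusing M = 44 dB from the text but with otherwise illustrative range and RCS values:

```python
import numpy as np

# M = (G_r*G_t)/(G_r'*G_t') = 44 dB as in the text; other values illustrative.
M = 10 ** (44.0 / 10)

def K2_db(r1, r2, r3, sigma_b):
    """K2 = Pr/Pz ~ M * r3^2 * sigma_b / (4*pi * r1^2 * r2^2), in dB."""
    return 10 * np.log10(M * r3**2 * sigma_b / (4 * np.pi * r1**2 * r2**2))

# A 10 m^2 bistatic-RCS target at r1 = 150 km, r2 = 100 km, baseline 80 km:
# the ratio comes out deeply negative, i.e. the direct wave dominates.
print(K2_db(150e3, 100e3, 80e3, 10.0))
```

Even with a 44 dB side-lobe gain advantage, K_2 stays tens of dB below zero at realistic ranges, which is why direct-wave suppression is required before detection.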

3.3. CFAR Detection

The number of reference units used in CFAR detection is N. The distribution of the reference units is shown in Figure 8, in which X is the cell under test. The cells immediately to the left and right of X are set as protection cells, with N/2 reference cells on each side.
According to the N reference units, the CFAR processor obtains a relative estimate of the background intensity, Z, which depends on the CFAR detection method. In the multiplier, the decision threshold, TZ, is obtained by multiplying the estimate Z by a threshold weighting coefficient T, where T = N(p_f^(−1/N) − 1) and p_f is the false alarm probability. In the comparator, the cell under test is compared with the decision threshold TZ: if X is larger than TZ, the output is 1; otherwise, the output is 0.
In the CFAR detector, there are two methods to calculate the signal evaluation, Z, of N reference units: The CA-CFAR detection method and the OS-CFAR detection method. CA-CFAR has good detection performance in a uniform environment, and OS-CFAR has obvious advantages in a clutter edge and multi-target environment.
In the CA-CFAR detector, Z is the average of all reference-cell signals x_i, i = 1, …, N:
Z = (1/N)(x_1 + x_2 + … + x_N).
In the OS-CFAR detector, the signals x_i, i = 1, …, N, of the N reference units are sorted by amplitude from large to small, and the k-th order statistic is taken as the background estimate Z:
x_(1) ≥ x_(2) ≥ … ≥ x_(k) ≥ … ≥ x_(N), Z = x_(k).
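A minimal sketch of both detectors follows. The guard-cell count is an assumed value, and for simplicity the CA threshold factor T is reused for OS-CFAR as well; in practice the OS-CFAR scaling factor is derived separately:

```python
import numpy as np

def threshold_factor(N, pf):
    """CA-CFAR scaling T = N * (pf^(-1/N) - 1) from the text."""
    return N * (pf ** (-1.0 / N) - 1.0)

def cfar_detect(x, N=48, guard=2, pf=1e-4, method="ca", k=18):
    """0/1 decisions for each cell of the power vector x.
    N reference cells, N/2 on each side of the cell under test, separated by
    `guard` protection cells. method 'ca' averages the reference cells;
    'os' takes the k-th largest (order statistic)."""
    T = threshold_factor(N, pf)       # simplification: same T reused for 'os'
    half = N // 2
    out = np.zeros(x.size, dtype=int)
    for i in range(x.size):
        idx = np.r_[i - guard - half:i - guard, i + guard + 1:i + guard + 1 + half]
        ref = x[idx[(idx >= 0) & (idx < x.size)]]
        Z = ref.mean() if method == "ca" else np.sort(ref)[::-1][min(k, ref.size) - 1]
        out[i] = int(x[i] > T * Z)
    return out

# Flat exponential noise floor with one strong cell: both detectors
# should flag cell 100 and little else.
rng = np.random.default_rng(0)
power = rng.exponential(1.0, 256)
power[100] = 500.0
hits_ca = np.flatnonzero(cfar_detect(power, method="ca"))
hits_os = np.flatnonzero(cfar_detect(power, method="os"))
```

Note how cells adjacent to the strong return see it in their reference window, which inflates their threshold; the guard cells keep the target's own energy out of its reference estimate.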

3.4. Fusion Detection Probability

The cooperative detection system mentioned in this paper can form a single radar system and a bistatic radar system. Fusion detection can be carried out at different processing stages and can be divided into measurement fusion and decision fusion. Figure 9 shows the topology of fusion detection. In Figure 9a, measurement fusion, each receiver observes independently to obtain an observation matrix, and a fusion center then receives the observation information of the receivers for judgment. In Figure 9b, decision fusion, each receiver observes and judges independently, and a fusion center then receives the decision information of the receivers to obtain a global decision.
For measurement fusion, after CFAR detection in the signal processing flow, the measurement matrices of the systems, R_1, R_2, …, R_n, are obtained respectively. These matrices can be superposed and fused; then, the existence of the target is determined, and the detection probability is calculated through repeated trials of the Monte-Carlo method.
For decision fusion, the two systems separately detect and decide whether there is a target or not. The detection probability is calculated by the Monte-Carlo method, and the detection probability after the decision is fused.
The cooperative detection system is composed of n receivers, R_i (i = 1, 2, …, n), and the probability that receiver R_i detects the target is P(R_i).
The OR-like fusion rule can be expressed as:
u_0 = 1 if u_1 + u_2 + … + u_n ≥ 1; otherwise, u_0 = 0.
The total probability that the target is detected by at least one receiver is:
P_s,or = P(R_1 ∪ R_2 ∪ … ∪ R_n) = 1 − [1 − P(R_1)][1 − P(R_2)]…[1 − P(R_n)].
Similarly, the AND-like fusion rule can be expressed as:
u_0 = u_1 u_2 … u_n, i.e., u_0 = 1 only if every u_i = 1; otherwise, u_0 = 0.
The total probability that the target is detected simultaneously by all receivers (assuming the receivers decide independently) is:
P_s,and = P(R_1 ∩ R_2 ∩ … ∩ R_n) = P(R_1)P(R_2)…P(R_n).
The cooperative detection system in this paper is composed of a single radar system and a bistatic radar system, whose detection probabilities at the two receivers are P(R_1) and P(R_2), respectively. Then, for the OR-like fusion rule, the fusion detection probability can be expressed as:
P_s,or = 1 − [1 − P(R_1)][1 − P(R_2)].
For the AND-like fusion rule, the fusion detection probability can be expressed as:
P_s,and = P(R_1)P(R_2).
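These two-receiver expressions can be checked with a quick Monte-Carlo run, using illustrative local detection probabilities and assuming the two receivers decide independently:

```python
import numpy as np

p1, p2 = 0.6, 0.8                      # illustrative P(R1), P(R2)
p_or_theory = 1 - (1 - p1) * (1 - p2)  # OR-like rule: 0.92
p_and_theory = p1 * p2                 # AND-like rule: 0.48

rng = np.random.default_rng(0)
n = 200_000
u1 = rng.random(n) < p1                # local decisions of receiver 1
u2 = rng.random(n) < p2                # local decisions of receiver 2

p_or_mc = np.mean(u1 | u2)             # u0 = 1 if at least one u_i = 1
p_and_mc = np.mean(u1 & u2)            # u0 = 1 only if all u_i = 1
```

As expected, the OR-like rule raises the detection probability above the better single receiver, while the AND-like rule lowers it below the worse one (in exchange for a lower false alarm rate).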
We now analyze the complexity of the two algorithms. Without loss of generality, one multiplication or one addition is regarded as one basic operation, and the total number of additions and multiplications in an algorithm is regarded as its basic operation count [28]. The measurement-fusion and decision-fusion algorithms are identical up to the point where the measurement vector is obtained, so the complexity analysis starts from the measurement vector. Assume that the length of the measurement vector is M, the number of fused sensors is two, the number of CFAR reference units is N, and the number of Monte-Carlo experiments is n_l. For measurement fusion, the measurement vectors are first fused; the complexity of the OR-like rule is the same as that of the AND-like rule, recorded as O(M). Then, the fused vector is judged: the reference threshold is calculated first, where the CA-CFAR algorithm computes the mean of the reference cells with complexity O(MN) and the OS-CFAR algorithm sorts the reference cells with complexity O(MN²); then, the presence of a target is judged with complexity O(M). For decision fusion, the two measurement vectors are judged separately: the reference threshold is calculated, with complexity O(2MN) for the CA-CFAR algorithm and O(2MN²) for the OS-CFAR algorithm, and the presence of a target is judged with complexity O(2M). Then, the decision vectors are fused; the complexity of the OR-like rule is again the same as that of the AND-like rule, recorded as O(M). Finally, the Monte-Carlo experiment multiplies the total by a factor of n_l.
The total computational complexity of different fusion methods combined with different CFAR algorithms is shown in Table 2. It can be seen that the computation of the measurement fusion is slightly less than that of the decision fusion, and the computation of OS-CFAR is slightly more than that of CA-CFAR.

3.5. Estimation of Target Position Parameters

In the scenario assumed in this paper, the positions of T/R_1 (x_t, y_t, z_t) and R_2 (x_r, y_r, z_r) are known, so the baseline length, r_3, is known and expressed as:
r_3 = √((x_t − x_r)² + (y_t − y_r)² + (z_t − z_r)²).
The delay time of the direct wave, t_1, can then be estimated:
t_1 = r_3/c,
where c = 3 × 10⁸ m/s is the propagation velocity of the electromagnetic wave.
Converted to the corresponding distance and range-cell units:
R_g1 = c t_1/2,
N_z = 2(R_g1 − R_min)/(c T_s),
where T_s is the sampling period, R_min is the minimum detection distance, and R_g1 is the false-target range parameter.
According to the geometric relationship in Figure 2, the following equations can be obtained:
r_1 = √((x_t − x)² + (y_t − y)² + (z_t − z)²),
r_2 = √((x_r − x)² + (y_r − y)² + (z_r − z)²),
tan φ_1 = (y − y_t)/(x − x_t),
sin ε_1 = (z − z_t)/r_1,
where φ_1 and ε_1 are the azimuth angle and pitch angle corresponding to the center line of the transmitter antenna main lobe, respectively.
The direct-wave arrival time is denoted as t_1 and the echo arrival time as t_2; then:
c(t_2 − t_1) = r_1 + r_2 − r_3.
By solving Equations (33) and (34), the location information of the target can be obtained, and the target location can be carried out according to the estimated parameters.
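Because the target is off the baseline, r_1 + r_2 grows monotonically along the transmit-beam ray, so the equations above can be solved by a one-dimensional search for r_1. The sketch below uses hypothetical platform coordinates and synthesizes the measurements (φ_1, ε_1, t_2 − t_1) from a ground-truth target only to close the loop:

```python
import numpy as np

c = 3e8
r_t = np.array([0.0, 0.0, 8e3])      # transmitter position (hypothetical, m)
r_r = np.array([60e3, 0.0, 6e3])     # receiver position (hypothetical, m)
r3 = np.linalg.norm(r_t - r_r)       # baseline length

# Ground-truth target, used here only to synthesize the measurements:
tg_true = np.array([110e3, 40e3, 2e3])
d1 = tg_true - r_t
r1_true = np.linalg.norm(d1)
r2_true = np.linalg.norm(tg_true - r_r)
phi1 = np.arctan2(d1[1], d1[0])      # beam azimuth
eps1 = np.arcsin(d1[2] / r1_true)    # beam pitch
dt = (r1_true + r2_true - r3) / c    # measured t2 - t1

# Unit vector along the transmit beam:
u = np.array([np.cos(eps1) * np.cos(phi1),
              np.cos(eps1) * np.sin(phi1),
              np.sin(eps1)])

def mismatch(r1):
    """r1 + r2(r1) - (c*dt + r3); zero at the true transmitter-target range."""
    p = r_t + r1 * u
    return r1 + np.linalg.norm(p - r_r) - (c * dt + r3)

lo, hi = 1.0, 1e6                    # bracket the root, then bisect
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
tg_est = r_t + 0.5 * (lo + hi) * u   # recovered target position
```

Bisection is used here purely for transparency; the ellipse equation also admits a closed-form solution for r_1.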

4. Simulation Analysis of Target Cooperative Detection

According to the commonly used data range of the existing equipment working mode, simulation parameters were set. The parameters are shown in Table 3.

4.1. Analysis of Simulation Results of Single-Base Echo

According to the settings and simulation parameters, the simulation results of the echo and signal processing of the single-base radar system are shown in Figure 10. Figure 10a is the three-dimensional information of the received echo, in which the target signal is completely submerged. Figure 10b is the spectrum of the echo signal. It can be seen that the echo Doppler frequencies are mainly concentrated in two places: one extends from 50 to 200 km with high power, which is the clutter Doppler frequency generated by the platform motion; the other is near 120 km in a small range interval, with the Doppler frequency generated by the relative motion of the platform and the target. Figure 10c is the echo signal processed by AMTI. It can be seen that the clutter is well suppressed and the target echo signal appears. Figure 10d is the spectrum of the echo signal processed by AMTI, in which the clutter Doppler spectrum is basically suppressed. Theoretically, the SNR gain after pulse compression is 14.77 dB; the two-dimensional and three-dimensional images of the echo signal processed by pulse compression are shown in Figure 10e,f. After pulse compression, the target signal is prominent and the output SNR is increased. To further improve the SNR, coherent accumulation is applied, with a theoretical SNR gain of 18.06 dB. Figure 10g,h are the two-dimensional and three-dimensional images of the echo signal after coherent accumulation, and it can be seen that the target signal is more prominent. Figure 10i is the result of CA-CFAR detection and Figure 10j is the result of OS-CFAR detection; the target signal is detected when it is above the threshold, and Figure 10k is the decision result. The position parameters of the target can be obtained through the geometric relations and the radar beam angles. Figure 10l shows the position of the target coordinate obtained by simulation.
The final positioning error can be obtained by comparing the target position parameters obtained by system detection with the actual position parameters. The simulation assumes that the coordinates of the target position are Tgd = [110 40 2] km; according to the simulation results, the estimated target position is Tg1d = [110.1207 40.0439 1.9934] km. The position errors are then Δd = [120.7 43.9 −6.6] m, i.e., 0.11%.

4.2. Analysis of Simulation Results of Double-Base Echo

According to the scenario and simulation parameters that were set, the echo and signal-processing results of the bistatic radar system are shown in Figure 11. Figure 11a shows the three-dimensional information of the received echo, in which the target signal is completely submerged. Figure 11b shows the spectrum of the echo signal. The echo Doppler frequencies are concentrated in four regions. Two of them extend from 50 to 200 km with high power; these are the clutter Doppler components generated by platform motion. The other two occupy a narrow range: the Doppler frequency generated by the relative motion of the platform and the target, and the Doppler frequency of the direct wave, the latter having the higher power. Figure 11c shows the echo after AMTI processing: the clutter is well suppressed and the target echo appears. Figure 11d shows the corresponding spectrum, in which the clutter Doppler spectrum is largely suppressed. Figure 11e shows the echo after MTI processing: the clutter near the target signal is well suppressed and the target echo appears. Figure 11f shows the corresponding spectrum; the clutter spectrum near zero Doppler is largely suppressed, as is the direct-wave Doppler component. Figure 11g,h show the echo signal after pulse compression: the target signal is prominent, the output SNR is increased, and the direct-wave signal is further suppressed. Figure 11i,j show the echo signal after phase-coherent accumulation: the target signal is more prominent still, and the influence of the clutter and direct wave is reduced. Figure 11k shows the CA-CFAR detection result, and Figure 11l the OS-CFAR detection result.
The target signal is detected when it exceeds the threshold; Figure 11m shows the decision result, and Figure 11n the target coordinate position obtained by simulation.
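The processing chain just described (moving-target-indication cancellation, pulse compression, coherent accumulation) can be sketched on a toy echo model. This is an illustrative sketch, not the paper's simulator: the clutter model, target amplitude, fast-time layout, and Doppler values are assumed; only fs, B, Tp, fr, M, and the canceller coefficients h = [1, −3, 3, −1] are taken from Table 3, and the clutter here is idealized (constant over pulses), so the fixed canceller removes it exactly, whereas the paper's AMTI adapts to platform-motion clutter.

```python
import numpy as np

# Assumed parameters from Table 3: fs = 2 MHz, B = 1 MHz, Tp = 30 us,
# fr = 3 kHz, M = 64 pulses. Everything else below is illustrative.
fs, B, Tp, fr, M = 2e6, 1e6, 30e-6, 3e3, 64
n_fast = 400                                   # fast-time samples per pulse (assumed)
t = np.arange(int(round(Tp * fs))) / fs
chirp = np.exp(1j * np.pi * (B / Tp) * t**2)   # LFM reference for pulse compression

rng = np.random.default_rng(0)
fd_target = 800.0                              # target Doppler, Hz (assumed)
m = np.arange(M)[:, None]                      # slow-time (pulse) index
echo = np.zeros((M, n_fast), dtype=complex)
echo[:, 100:100 + chirp.size] += 0.05 * chirp * np.exp(2j*np.pi*fd_target*m/fr)
echo += 5.0 * rng.standard_normal(n_fast)      # strong clutter, constant over pulses

# Four-pulse canceller h = [1, -3, 3, -1] along slow time, then matched
# filtering (pulse compression) and coherent accumulation via a slow-time FFT.
h = np.array([1, -3, 3, -1])
mti = np.apply_along_axis(lambda col: np.convolve(col, h, mode='valid'), 0, echo)
pc = np.apply_along_axis(lambda row: np.correlate(row, chirp, mode='same'), 1, mti)
rd_map = np.fft.fftshift(np.fft.fft(pc, axis=0), axes=0)   # range-Doppler map

# The map's peak sits at the target's range bin and Doppler bin: the DC
# clutter is cancelled exactly because the coefficients of h sum to zero.
print(np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape))
```

The same structure applies per receiver channel; the paper's AMTI additionally steers the canceller notch to the (non-zero) clutter Doppler induced by platform motion.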
The final positioning error is obtained by comparing the target position parameters estimated by the system with the actual position parameters. The simulation assumes a true target position of Tgs = [110 40 2] km; the position estimated from the simulation results is Tg1s = [110.099 40.032 1.995] km, giving position errors of Δs = [99 32 −4.9] m, i.e., a relative error of about 0.09%.
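The positioning-error arithmetic above can be checked directly. The reading below is an assumption on our part: the quoted 0.09% is taken as the norm of the error vector relative to the target's range from the origin, and the rounded coordinates reproduce the quoted component errors to within about 0.1 m.

```python
import numpy as np

tgs  = np.array([110.0, 40.0, 2.0])        # true position, km (from the text)
tg1s = np.array([110.099, 40.032, 1.995])  # estimated position, km (from the text)

# Error vector in metres; the text reports [99 32 -4.9] m, presumably from
# unrounded estimates (the rounded coordinates give -5.0 m in height).
delta_m = (tg1s - tgs) * 1000.0

# Relative error: error-vector norm over target range (assumed interpretation).
rel_err = np.linalg.norm(delta_m) / (np.linalg.norm(tgs) * 1000.0)
print(delta_m, f"{rel_err:.2%}")
```

With these numbers the relative error evaluates to roughly 0.09%, matching the figure quoted in the text.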

4.3. Simulation Analysis of CFAR Detection

Based on the description of the CA-CFAR and OS-CFAR algorithms in Section 3.3, a simulation was performed with the previously set parameters. The number of reference units in the CFAR algorithms was set to 48, and the ordinal value of OS-CFAR was set to 18. Assuming that the echo powers of the single radar and the bistatic radar at the target position are equal, the Monte-Carlo method was used to calculate the target detection probability under the two algorithms. The simulation results are shown in Figure 12. When the SNR is −8 dB, the detection probability of the single radar using the CA-CFAR algorithm is 52.6%, and that of the bistatic radar is 72.3%; using the OS-CFAR algorithm, the detection probability of the single radar is 69.5%, and that of the bistatic radar is 77.5%. That is, at the same SNR, the OS-CFAR algorithm achieves the higher detection probability and is more advantageous in a clutter background. The OS-CFAR algorithm is therefore preferred in the distributed architecture designed in this paper.

4.4. Simulation Analysis of Fusion Detection

From the simulation results in the previous three sections, the detection probability of the bistatic radar system is higher when the SNR is below −5 dB, and the OS-CFAR algorithm is more advantageous in a clutter background; local detection in this paper therefore adopts the OS-CFAR algorithm. To improve the detection probability of the system, the received echoes of the single radar and of the bistatic radar were fused for detection after signal processing.
From the simulation of the ratio of the single-radar target echo power to the bistatic-radar target echo power in Figure 5, the value of this ratio differs across scenarios. Using the simulation parameters set in Section 3.1, the single radar and bistatic radar detection results were fused, and the detection probability of the fusion processing was analyzed by the Monte-Carlo method in the following scenarios.
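The fusion Monte-Carlo can be sketched at the decision level: each trial draws the two local (single-radar and bistatic) OS-CFAR decisions as Bernoulli outcomes with the per-channel detection probabilities, then applies the OR-like and AND-like rules. Treating the two channels as independent is an assumption of this sketch.

```python
import numpy as np

def fused_pd(p_single, p_bistatic, n_trials=200_000, seed=0):
    """Monte-Carlo estimate of (OR-like, AND-like) fused detection probability."""
    rng = np.random.default_rng(seed)
    d1 = rng.random(n_trials) < p_single     # single-radar local decisions
    d2 = rng.random(n_trials) < p_bistatic   # bistatic local decisions
    return (d1 | d2).mean(), (d1 & d2).mean()

# The SNR = -8 dB operating point of Section 4.3 (OS-CFAR, equal echo powers).
p_or, p_and = fused_pd(0.695, 0.775)
print(f"OR-like: {p_or:.3f}  AND-like: {p_and:.3f}")
```

Under the independence assumption this reproduces the fused probabilities reported in Section 4.4.2 (93.7% and 53.3%) to within about one percentage point.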

4.4.1. K1 > 1

When the power of the bistatic radar echo at the target position is greater than that of the single radar echo, the detection performance curves obtained by simulation are shown in Figure 13. The detection probabilities of the three systems differ when the SNR is between −15 and −5 dB. At an SNR of −8 dB, the detection probability of the single radar echo is 74%, that of the bistatic radar echo is 92.5%, the fusion detection probability under the OR-like fusion rule is 97.9%, and that under the AND-like fusion rule is 68.6%. That is, at the same SNR, the detection performance of the bistatic radar echo is better than that of the single radar echo. The detection performance curve of the cooperative detection system after OR-like fusion is much better than that of the single radar system, but only slightly better than that of the bistatic radar system. Conversely, the curve after AND-like fusion is much worse than that of the bistatic radar system, but only slightly worse than that of the single radar system. In other words, when the power of the bistatic radar echo is high, the contribution of the single radar echo signal to fusion detection is limited.

4.4.2. K1 = 1

When the bistatic echo power equals the single-radar echo power at the target location, the simulated detection performance curves are shown in Figure 14. The detection probabilities of the three systems differ when the SNR is between −13 and −5 dB. At an SNR of −8 dB, the detection probability of the single radar echo is 69.5%, that of the bistatic radar echo is 77.5%, the fusion detection probability under the OR-like fusion rule is 93.7%, and that under the AND-like fusion rule is 53.3%. That is, at the same SNR, the detection performance of the bistatic radar echo is better than that of the single radar echo. After OR-like fusion, the detection performance curve is better than those of both the bistatic and single radar systems; after AND-like fusion, it is worse than both.

4.4.3. K1 < 1

When the bistatic radar echo power at the target position is less than the single radar echo power, the simulated detection performance curves are shown in Figure 15. The detection probabilities of the three systems differ when the SNR is between −13 and −5 dB. At an SNR of −8 dB, the detection probability of the single radar echo is 70.2%, that of the bistatic radar echo is 53.1%, the fusion detection probability under the OR-like fusion rule is 86.1%, and that under the AND-like fusion rule is 37.2%. That is, at the same SNR, the detection performance of the single radar echo is better than that of the bistatic radar echo. After OR-like fusion, the detection performance curve is better than those of both the bistatic and single radar systems; after AND-like fusion, it is worse than both.
From the simulation analysis of the above three situations, the following conclusions can be drawn. Of the two fusion rules, the OR-like rule yields the higher detection probability and exploits the performance of both sensors, whereas the AND-like rule yields a lower detection probability: under non-uniform SNR, the dual-sensor system effectively "degrades" to single-sensor detection. OR-like fusion exhibits essentially no such degradation; even when the detection conditions of one of the two sensors are very poor, the detection performance of the dual-sensor system remains better than that of a single sensor.
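The three scenarios above admit a closed-form cross-check if the two local decisions are assumed independent: P_OR = 1 − (1 − P1)(1 − P2) and P_AND = P1·P2. Plugging in the per-channel detection probabilities reported at SNR = −8 dB reproduces the simulated fusion probabilities to within about one percentage point, which supports the independence assumption in these simulations.

```python
# Per-channel detection probabilities (P_single, P_bistatic) at SNR = -8 dB,
# taken from Sections 4.4.1-4.4.3.
scenarios = {
    "K1 > 1": (0.740, 0.925),
    "K1 = 1": (0.695, 0.775),
    "K1 < 1": (0.702, 0.531),
}

for name, (p1, p2) in scenarios.items():
    p_or  = 1 - (1 - p1) * (1 - p2)   # OR-like rule under independence
    p_and = p1 * p2                   # AND-like rule under independence
    print(f"{name}: OR-like {p_or:.3f}, AND-like {p_and:.3f}")
```

The "degradation" of AND-like fusion is visible directly in the formula: P_AND is bounded above by the weaker channel's probability, so one poor sensor caps the fused result, while P_OR is bounded below by the stronger channel's probability.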
In a distributed cooperative detection system, OR-like fusion makes full use of the detection capability of each sensor and thus reflects the performance advantage of cooperative detection. This paper mainly designed a cooperative detection system of two platforms, with the aims of improving the detection probability of enemy targets as far as possible, making full use of the detection capability of each sensor, and exploiting the complementary coverage of the single radar and bistatic radar in the detection area. Therefore, the design mainly considered the detection performance under the OR-like fusion rule.

5. Conclusions

In this paper, a dual platform cooperative detection system was taken as the research object, and the signal processing and target fusion detection of an airborne cooperative detection system under clutter conditions were studied. First, we constructed the geometric model of a dual platform cooperative detection system, which can form a single radar and a bistatic radar system, respectively, and then performed signal processing. Clutter suppression was achieved by AMTI. AMTI, pulse compression, and phase-coherent accumulation were then used to suppress the direct wave and thereby the false targets it generates during detection. Finally, the echoes received by the two receivers were fused to improve target detection performance. The experimental results based on simulated data demonstrated that: (1) the bistatic radar system with a split transceiver had a larger detection distance than the single radar system, but was more strongly affected by clutter; (2) the direct wave could be eliminated effectively, with no false target formed after suppression; (3) the detection probability of the bistatic radar system with split transceivers was higher than that of the single radar system; and (4) the detection probability of fusion detection based on the two receivers was higher than that of either the bistatic or the single radar system. The dual platform cooperative detection system constructed in this paper combines the advantages of a bistatic radar system and a single radar system: the detection range is extended and the detection performance improved.
In future research, a comparative analysis of different fusion detection methods will be conducted to identify a better fusion detection method. In addition, we will contact the relevant departments to design an actual flight test to verify the effectiveness of the structure and method designed in this manuscript.

Author Contributions

All authors contributed extensively to the study presented in this manuscript. The details are as follows: writing—original draft preparation, H.W.; methodology, Z.T.; writing—review and editing, Y.Z. and Y.C.; data curation, Z.Z. and Y.Z.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61901514, and in part by the Young Talent Program of Air Force Early Warning Academy under Grant TJRC425311G11.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for processing our manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Scene schematic diagram of cooperative irradiation.
Figure 2. Geometric diagram of the cooperative illumination scene.
Figure 3. Block diagram of the radar echo processing for the cooperative detection system.
Figure 4. Target model and mesh generation.
Figure 5. Simulation of the radar cross-section: (a) Cross-section of the single radar; (b) Cross-section of the bistatic radar.
Figure 6. The value of the distribution of K1: (a) Three-dimensional distribution of K1; (b) The isogram of the distribution of K1.
Figure 7. The value of the distribution of K2: (a) Three-dimensional distribution of K2; (b) The isogram of the distribution of K2.
Figure 8. CFAR processor architecture.
Figure 9. Topology structure diagram of fusion detection: (a) Measurement fusion; (b) Decision fusion.
Figure 10. Simulation results of single base radar target echo: (a) Three-dimensional information of the echo signal; (b) Spectrum of the echo signal; (c) Echo signal processed by AMTI; (d) Spectrum of the echo signal processed by AMTI; (e) Three-dimensional view of the echo signal processed by pulse compression; (f) Two-dimensional view of the echo signal processed by pulse compression; (g) Three-dimensional view of the echo signal after phase-coherent accumulation processing; (h) Two-dimensional view of the echo signal after phase-coherent accumulation processing; (i) Result of CA-CFAR test; (j) Result of OS-CFAR test; (k) Decision result; (l) The position of the target coordinate.
Figure 11. Simulation results of target echo of bistatic radar: (a) Three-dimensional information of the echo signal; (b) Spectrum of the echo signal; (c) Echo signal processed by MTI; (d) Spectrum of the echo signal processed by MTI; (e) Echo signal processed by AMTI; (f) Spectrum of the echo signal processed by AMTI; (g) Three-dimensional echo signal processed by pulse compression; (h) Two-dimensional echo signal processed by pulse compression; (i) Three-dimensional echo signal after phase-coherent accumulation processing; (j) Two-dimensional echo signal after phase-coherent accumulation processing; (k) Result of CA-CFAR test; (l) Result of OS-CFAR test; (m) Decision result; (n) The position of the target coordinate.
Figure 12. Detection performance curve of the CFAR algorithm.
Figure 13. Bistatic radar echo power is greater than single radar echo power: (a) Detection performance curve; (b) Difference value of the detection performance curve with the bistatic radar; (c) Difference value of the fusion detection performance curve.
Figure 14. Single and double base target echo signal power is the same: (a) Detection performance curve; (b) Difference value of the detection performance curve with a bistatic radar; (c) Difference value of the fusion detection performance curve.
Figure 15. Single radar echo power is greater than bistatic radar echo power: (a) Detection performance curve; (b) Difference value of the detection performance curve with a single radar; (c) Difference value of the fusion detection performance curve.
Table 1. The notation of the symbols.
  Notation                Notes
  x                       Scalar
  x (bold, lowercase)     Vector
  X (bold, uppercase)     Matrix
  (·)^(−1)                Inverse of a matrix
  | · |                   Absolute value
  rect(·)                 Rectangle function
Table 2. Total computational complexity.
  Fusion CFAR          CA-CFAR            OS-CFAR
  Measurement fusion   O(nl(2M + MN))     O(nl(2M + MN^2))
  Decision fusion      O(nl(3M + 2MN))    O(nl(3M + 2MN^2))
Table 3. Settings of simulation parameters.
  Transmitter coordinates (x_t, y_t, z_t): (0, 0, 8) km
  Receiver coordinates (x_r, y_r, z_r): (80, 20, 8) km
  Target coordinates (x, y, z): (110, 40, 2) km
  Transmitter velocity vector V_t: (100, 10, 0) m/s
  Receiver velocity vector V_r: (100, 10, 0) m/s
  Target velocity vector V_tg: (−100, −50, 0) m/s
  Transmitting signal carrier frequency f_c: 1 GHz
  Transmission time width T_p: 30 μs
  Transmitted signal bandwidth B: 1 MHz
  Pulse repetition frequency f_r: 3000 Hz
  Pulse number M: 64
  Sampling frequency f_s: 2 MHz
  Signal-to-noise ratio SNR: 15 dB
  Signal-to-clutter ratio SCR: −35 dB
  Signal/direct wave power ratio K_2: −15 dB
  Four-pulse cancellation coefficients of MTI h: [1 −3 3 −1]
  Number of reference units N: 48
  Ordinal value of OS-CFAR k: 18
  False alarm probability P_f: 10^(−6)
