Article

Marine Extended Target Tracking for Scanning Radar Data Using Correlation Filter and Bayes Filter Jointly

School of Information Science and Technology, University of Science and Technology of China, Hefei 230022, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 5937; https://doi.org/10.3390/rs14235937
Submission received: 7 November 2022 / Revised: 21 November 2022 / Accepted: 21 November 2022 / Published: 23 November 2022
(This article belongs to the Section Remote Sensing Image Processing)

Abstract:
As radar resolution improves, the extended structure of targets in radar echoes can contribute significantly to tracking performance, so specific trackers need to be designed for these targets. However, traditional radar target tracking methods are mainly based on the accumulation of the target’s motion information, while the target’s appearance information is ignored. In this paper, a novel tracking algorithm that exploits both the appearance and motion information of a target is proposed to track a single extended target in maritime surveillance scenarios by incorporating the Bayesian motion state filter and the correlation appearance filter. The proposed algorithm consists of three modules. Firstly, a Bayesian module is utilized to accumulate the motion information of the target. Secondly, a correlation module is applied to capture the appearance features of the target. Finally, a fusion module is proposed to integrate the results of the former two modules according to the maximum a posteriori criterion. In addition, a feedback structure is proposed to transfer the fusion results back to the former two modules to improve their stability. Moreover, a scale adaptive strategy is presented to improve the tracker’s ability to cope with targets of varying shapes. Finally, the effectiveness of the proposed method is verified on measured radar data. The experimental results demonstrate that the proposed method achieves superior performance compared with traditional algorithms that rely solely on the target’s motion information. Moreover, the method is robust under complicated scenarios, such as clutter interference, target shape changes, and low signal-to-noise ratio (SNR).

1. Introduction

Radar extended target tracking technology is widely used in military and civilian fields, e.g., maritime target tracking. Traditional target tracking methods perform well under the point target hypothesis, which is appropriate when each target produces only one measurement per frame. Tracking filters are used to smooth the measured data across scans; e.g., the Kalman filter (KF) provides an optimal estimate for linear systems based on Gaussian and Markov assumptions [1]. Subsequently, for nonlinear systems, several KF variants were proposed under the Gaussian assumption, including the Extended KF (EKF) [2] and the Unscented KF (UKF) [3]. Among them, the EKF ignores high-order terms of the system’s state through linear approximation. The UKF reduces the estimation error via the unscented transform, which can effectively cope with highly nonlinear environments. These methods use the accumulated detector data as input, with the aim of iteratively calculating the posterior probability density function (PDF) of the target’s kinematic state, and can be described within a unified Bayes filter (BF) formalism [4]. However, traditional trackers are not suitable for an extended target, which may produce multiple noisy measurements, since the measurements of ship targets captured by a high-resolution radar working in scanning mode usually exhibit two-dimensional expansion in range and azimuth. The higher the radar’s range resolution, the more likely the extended target is to generate measurements occupying multiple range resolution cells. In addition, if the radar works in scanning mode, the target may occupy multiple azimuth cells. Additionally, the resolution cells occupied by the target’s scattering points often change due to the radar resolution, the reflection strength of the target, or changes in the target’s attitude, which causes the shape of some targets’ scattering-point distribution to vary.
Recently, many methods have been proposed for extended target tracking [5,6,7,8,9]. For example, Koch [5,6] utilized the random matrix model to describe the extension of the target appearance and derived the analytical solutions under the Bayesian framework. Granstrom [8] proposed the extended target-Gaussian inverse Wishart-probability hypothesis density (ET-GIW-PHD) filter based on the RM (Random Matrices) model under the PHD framework. In addition, Granstrom [9] estimated the target’s motion state and shape by recursively calculating the Gamma Gaussian inverse Wishart (GGIW) distribution of each target. Marcus et al. [10,11] proposed Random Hypersurface Models to model extended targets; these models describe the target’s profile in terms of a parametric set of geometric shape functions, of which the Gaussian process model is one typical example [12]. Since each target generates multiple measurements, it is necessary to partition the measurements of the target. The algorithms mentioned above ensure reliability by traversing all measurement partitions. However, the effect of measurement partitioning is unsatisfactory in challenging situations such as overtaking, head-on meetings, and interrupting clutter. Especially in maritime scenarios, strong sea clutter can interfere with the partitioning of measurements. Clutter classified into the target cell causes association errors in the above methods, which results in severe degradation of the tracking performance. Although the extended target tracking methods mentioned above can estimate the shape of the target, the tracker only uses the motion information of the targets. In radar target tracking, the tracker usually only utilizes the position observation provided by the detector, because the radar only captures one or more scattering points of the target. Intuitively, taking more of the available information into account would greatly improve the stability and accuracy of the tracker.
In addition to motion information, radar extended targets can also provide appearance information. Inspired by human visual instincts, some scholars introduce the amplitude information of the target into tracking methods, which not only use the motion information but also take advantage of the appearance information of targets. Shui et al. [13] proposed a new approach that exploits three features of the received time series for the detection of small floating targets in sea clutter. The difference in the statistical characteristics of amplitude between targets and clutter is used to improve the discrimination between them [14,15,16,17]. However, these feature-aided tracking methods require manual feature extraction, and it is difficult to find a stable feature suitable for the variety of complex maritime scenes.
Deep learning methods, which have achieved remarkable results in computer vision, are also used by many scholars to extract the features of radar targets. Chen et al. [18] proposed classifying marine targets and sea clutter with convolutional neural networks, which suggested that such statistical learning methods are suitable for marine target detection and tracking, and that the appearance of an extended target can provide useful features for detection. To fully utilize the information contained in radar signals and improve detection performance, Su et al. [19] utilized a graph convolutional network to detect marine radar targets. These deep learning methods have achieved very appealing performance on real datasets. However, they require a large amount of data for offline training.
The kernel correlation filter (KCF) is an online learning method widely used in visual tracking [20], which can track the target in real time given the training samples in the first frame. KCF [20] trains the filter based on the appearance information of the target in the current and previous frames, and then performs a correlation calculation on a new frame to track the appearance of the target. To handle scale changes of the target, Danelljan et al. [21] proposed learning discriminative correlation filters from a scale pyramid model, and Li et al. [22] gave a multiple-scale searching strategy. Zhou et al. [23] introduced the correlation filtering method into radar target tracking for the first time and proposed the Multiple Kernelized Correlation Filter (MKCF) from the perspective of multi-kernel fusion, which performed satisfactorily in some challenging situations. This indicated that the correlation filter (CF) method is promising in radar target tracking [23], and introducing the CF method into the traditional radar target tracking framework helps to improve tracking performance. However, most of these methods [23,24] treat the CF as a detector rather than a tracker, which is not the best way to combine the CF with the traditional radar target tracking framework.
To sum up, many existing methods for radar marine extended target tracking have been proven effective, but some of them suffer from poor robustness and incomplete use of observation information. Therefore, this paper attempts to make full, adaptive use of the limited observation information of radar targets. A new algorithm is proposed to track a single maritime radar extended target, utilizing both motion and appearance information by jointly applying the correlation filter and the Bayes filter (JCBF). The proposed algorithm consists of three main parts: (1) the Bayes filter (BF) is used to estimate the target’s motion state; (2) the CF calculates the likelihood PDF of the target’s position using real-time appearance features from the perspective of signal estimation; and (3) the final estimate of the target’s state is obtained by fusing the results of both filters under the maximum a posteriori estimation criterion. In addition, a specific scaling pool for radar signals is proposed to accommodate the targets’ extensions in range and azimuth. Finally, experiments are performed on scanning data collected by an X-band marine radar to verify the proposed algorithm.
The main contributions of this paper are as follows. Firstly, a novel tracker is designed specifically for radar targets by combining the BF and CF from a posterior probability perspective. Secondly, a shape-adaptive strategy is employed to track targets with varying shapes. Finally, experiments on real data sets demonstrate that the proposed method improves the stability of the tracker in low signal-to-clutter/noise ratio environments. Table 1 gives a list of notations used throughout the paper.

2. Proposed JCBF

This section begins with a brief introduction to the basic principles of the BF [4] and KCF [20]. Next, the theoretical derivation and a specific implementation of the integrated BF and CF approach are given in Section 2.3 and Section 2.4, respectively, where the viewpoint of [23], which analyzes the KCF from a probabilistic perspective, is highlighted. Following that, two strategies for improving the robustness of the tracker are given in Section 2.5: a feedback structure is suggested to pass the fusion outcomes back to the BF and KCF to improve their stability, and a scale adaptive strategy is presented to improve the robustness of the tracker against shape-changing targets. Finally, the proposed method is summarized in Section 2.6.

2.1. Bayesian Framework

Assume that the state equation and the observation equation of the system (linear or nonlinear) are as follows:
$$x_{k+1} = f_k(x_k) + u_k, \tag{1}$$
$$z_k = h_k(x_k) + n_k, \tag{2}$$
where $k \in \mathbb{N}$ denotes the time step, $x_k \in \mathbb{R}^n$ is the system state vector at time $k$, $f_k: \mathbb{R}^n \to \mathbb{R}^n$ denotes the state transition map, $u_k$ is the process noise, $z_k \in \mathbb{R}^m$ is the measurement vector at time $k$, $h_k: \mathbb{R}^n \to \mathbb{R}^m$ denotes the measurement map ($n > m$), and $n_k$ is the measurement noise. Note that $f_k$ and $h_k$ can denote both linear and nonlinear maps.
The Bayes filter uses the measurement set $Z_k \triangleq (z_1, z_2, \ldots, z_k)$ to calculate the PDF of the state $x_k$ at time $k$, i.e., $p(x_k \mid Z_k)$, $k \in \mathbb{N}$. The steps of the Bayes filter are divided into prediction and update, as follows:
  • Assuming that $p(x_{k-1} \mid Z_{k-1})$ has been obtained at time $k-1$, the PDFs of the one-step predictions of the state and measurement are (3) and (4), respectively:
    $$p(x_k \mid Z_{k-1}) = \int_{\mathbb{R}^n} p(x_k \mid x_{k-1})\, p(x_{k-1} \mid Z_{k-1})\, dx_{k-1}, \tag{3}$$
    $$p(z_k \mid Z_{k-1}) = \int_{\mathbb{R}^n} p(z_k \mid x_k)\, p(x_k \mid Z_{k-1})\, dx_k, \tag{4}$$
    where $p(x_k \mid x_{k-1})$ and $p(z_k \mid x_k)$ are given by (1) and (2), respectively.
  • The posterior PDF is calculated using the new measurement $z_k$ obtained at time $k$ [4]:
    $$p(x_k \mid Z_k) = \frac{p(z_k \mid x_k)\, p(x_k \mid Z_{k-1})}{p(z_k \mid Z_{k-1})}. \tag{5}$$
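For intuition, the prediction–update recursion (3)–(5) can be sketched on a discrete state grid, where the integrals become sums over grid states. This is an illustrative sketch only (not code from the paper); the function name and the discrete-grid setting are this example's assumptions.

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One predict-update cycle of a discrete-grid Bayes filter.

    prior:      p(x_{k-1} | Z_{k-1}) over n grid states, shape (n,)
    transition: p(x_k | x_{k-1}), shape (n, n); transition[i, j] = p(x_k = i | x_{k-1} = j)
    likelihood: p(z_k | x_k) evaluated at the new measurement, shape (n,)
    Returns the posterior p(x_k | Z_k), shape (n,).
    """
    predicted = transition @ prior            # Eq. (3): Chapman-Kolmogorov sum
    evidence = likelihood @ predicted         # Eq. (4): normalizer p(z_k | Z_{k-1})
    return likelihood * predicted / evidence  # Eq. (5): Bayes update
```

A three-state example with a measurement favoring the middle state concentrates the posterior there while keeping it a valid probability distribution.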

2.2. Kernel Correlation Filter

KCF [20] solves the classifier by means of ridge regression. In the initial linear regression model, the regression function is (6). Given the training feature–label pairs $\{g_i, y_i\}$, the loss function is (7):
$$f(g_i) = w^T g_i, \tag{6}$$
$$\min_w \sum_i \left(f(g_i) - y_i\right)^2 + \lambda \|w\|^2. \tag{7}$$
Assuming that $G = [g_1, g_2, \ldots, g_n]^T$ denotes $n$ training samples and $y = [y_1, \ldots, y_n]^T$ denotes their labels, it can be derived that $w$ has the closed-form solution $w = (G^H G + \lambda I)^{-1} G^H y$.
To improve the classifier’s performance, the data in the original space are mapped to a high-dimensional space, with the weights $w$ expressed as a linear combination of the mapped input samples:
$$w = \sum_i \alpha_i \varphi(g_i). \tag{8}$$
When substituting $w$ into the optimization, the detailed form of the mapping $\varphi(\cdot)$ is not needed; only the inner products of $\varphi(\cdot)$ are required, i.e., $k(g_i, g_j) = \langle \varphi(g_i), \varphi(g_j) \rangle$.
By introducing the Gaussian kernel function $k(g_i, g_j) = \exp\!\left(-\frac{1}{\sigma_k^2}\left(\|g_i\|^2 + \|g_j\|^2 - 2 g_i^T g_j\right)\right)$ and substituting it and the weights (8) into (7), the optimal solution can be converted to (9):
$$\alpha = (K + \lambda I)^{-1} y, \tag{9}$$
where $\alpha = [\alpha_1, \ldots, \alpha_n]^T$ and $K$ is an $n \times n$ circulant matrix with $K_{ij} = k(g_i, g_j)$.
Since $K$ is circulant, (9) can be converted into $\alpha = \mathcal{F}^{-1}\!\left(\dfrac{\mathcal{F}(y)}{\mathcal{F}(k^{gg}) + \lambda}\right)$, where $k^{gg}$ denotes the first row of the circulant matrix $K$ [20].
After training the model on sample $g$ in the last frame, the response to a new input sample $g'$ in the new frame is computed by:
$$Y = \mathcal{F}^{-1}\!\left(\mathcal{F}(k^{g g'}) \odot \mathcal{F}(\alpha)\right). \tag{10}$$
Let $v$ be the position variable of the target at a certain time step; the position of the target is the position of the maximum value in the response $Y(v)$, which is given by:
$$\hat{v} = \arg\max_v Y(v). \tag{11}$$
The model is updated by (12), where $g_t$ and $\alpha_t$ denote the training sample and the weights at time $t$, $g'_t$ and $\alpha_{g'_t}$ denote the new sample and the new weights, and $\eta \in (0, 1)$ denotes the forgetting rate [23]:
$$g_t = g'_t\, \eta + g_{t-1} (1 - \eta), \qquad \alpha_t = \alpha_{g'_t}\, \eta + \alpha_{t-1} (1 - \eta). \tag{12}$$
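The kernelized train and detect steps above can be sketched in one dimension. This is an illustrative sketch (not the paper's implementation): real KCF operates on 2D windowed features, and the helper names (`gaussian_correlation`, `train`, `respond`) are invented for this example. The kernel correlation of all cyclic shifts is computed via the FFT, as in [20].

```python
import numpy as np

def gaussian_correlation(g1, g2, sigma=0.5):
    """Gaussian kernel correlation k^{g1 g2} over all cyclic shifts, via FFT."""
    c = np.fft.ifft(np.conj(np.fft.fft(g1)) * np.fft.fft(g2)).real
    d = (g1 @ g1 + g2 @ g2 - 2.0 * c) / g1.size
    return np.exp(-np.maximum(d, 0.0) / sigma**2)

def train(g, y, lam=1e-3):
    """alpha = F^{-1}( F(y) / (F(k^{gg}) + lambda) ), the circulant form of Eq. (9)."""
    kgg = gaussian_correlation(g, g)
    return np.fft.ifft(np.fft.fft(y) / (np.fft.fft(kgg) + lam)).real

def respond(alpha, g, g_new):
    """Y = F^{-1}( F(k^{g g'}) . F(alpha) ), Eq. (10)."""
    kgz = gaussian_correlation(g, g_new)
    return np.fft.ifft(np.fft.fft(kgz) * np.fft.fft(alpha)).real
```

Training on a needle-shaped label centered at index 0 and then testing on a cyclically shifted copy of the template moves the response peak by exactly that shift, which is the property Eq. (11) exploits.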

2.3. Joint Correlation and Bayes Filtering

In order to make full use of the target’s appearance and motion information to estimate the target’s position $v_k \in \mathbb{R}^m$, the posterior PDF $p(v_k \mid Z_k, A_k)$ of $v_k$ at time $k$ is calculated utilizing both the obtained position measurement $z_k$ and the appearance measurement $a_k$, where $A_k \triangleq (a_1, a_2, \ldots, a_k)$ denotes the appearance measurement set.
The further discussion is much simplified by additionally assuming that the position of the target and the appearance feature of the target are independent of each other. Such an assumption is justified in many practical cases:
$$p(Z_k, A_k \mid v_k) = p(Z_k \mid v_k)\, p(A_k \mid v_k), \tag{13}$$
$$p(Z_k, A_k) = p(Z_k)\, p(A_k). \tag{14}$$
Therefore, the posterior PDF of the state $v_k$ is given by:
$$p(v_k \mid Z_k, A_k) = \frac{p(Z_k, A_k \mid v_k)\, p(v_k)}{p(Z_k, A_k)} = \frac{p(Z_k \mid v_k)\, p(A_k \mid v_k)\, p(v_k)}{p(Z_k)\, p(A_k)} = \frac{p(v_k \mid Z_k)\, p(A_k \mid v_k)}{p(A_k)} \propto p(v_k \mid Z_k)\, p(A_k \mid v_k). \tag{15}$$
Zhou et al. [23] were the first to analyze the KCF from the perspective of maximum likelihood. They proposed that the response (10) can be viewed as a likelihood function whose strength indicates the probability of the target being at each position $v$. Hence, given a position $v_k \in \mathbb{R}^m$, the probability of $v_k$ being the center of the target measured by KCF is the individual likelihood $p(A_k \mid v_k)$, where $A_k$ denotes the appearance measurements obtained by KCF. Treating the response of KCF as an appearance measurement captured by one sensor, the KCF can be considered a filter that utilizes the target’s real-time appearance measurements $A_k$ to calculate the likelihood PDF of the target’s position $v_k$.
In KCF, the response matrix is equal to the individual likelihood distribution of $v_k$ [23]; thus, $p(A_k \mid v_k)$ in (15) can be calculated by KCF:
$$p(A_k \mid v_k) = Y(v). \tag{16}$$
As discussed in Section 2.1, the BF calculates the posterior PDF $p(x_k \mid Z_k)$ of $x_k \in \mathbb{R}^n$ using the obtained measurement $z_k \in \mathbb{R}^m$ and $p(x_{k-1} \mid Z_{k-1})$.
The estimate $v_k \in \mathbb{R}^m$ of the target’s position is a part of $x_k$; thus, $x_k$ can be decomposed into two components as $x_k = [v_k, e_k]^T$, where $e_k \in \mathbb{R}^{n-m}$. As a result, $p(v_k \mid Z_k)$ is given by (17):
$$p(v_k \mid Z_k) = \int p(v_k, e_k \mid Z_k)\, de_k = \int p(x_k \mid Z_k)\, de_k. \tag{17}$$
Finally, $p(v_k \mid Z_k, A_k)$ can be calculated by (15)–(17). As a result, the maximum a posteriori (MAP) estimate of $v_k$ is given by (18):
$$\hat{v}_k = \arg\max_{v_k} p(v_k \mid Z_k, A_k) = \arg\max_{v_k} p(v_k \mid Z_k)\, p(A_k \mid v_k). \tag{18}$$
Both the appearance information and the motion information are successfully utilized in (18) to estimate the target’s position $v_k$ by jointly applying the Bayes filter and the correlation filter. The overall flow of JCBF is shown in Figure 1, in which a fusion module (Figure 1c) is used to combine the BF module (Figure 1a) and the CF module (Figure 1b).

2.4. Specific Implementation of JCBF

This section gives a specific implementation of JCBF. Since radar measurements are generally obtained from the distance and bearing of the target in the polar coordinate system, this paper constructs the equations of state and observation in the polar coordinate system and implements the proposed tracking algorithm.

2.4.1. Bayes Filter Module

In polar coordinates, the motion state of the target can be represented by $x_k = [r, \theta, \nu_r, \nu_\theta]^T$, where $r$ denotes the radial range between the target and the radar, $\theta$ denotes the azimuth, $\nu_r$ denotes the radial velocity, and $\nu_\theta$ denotes the tangential velocity. The measurement can be represented by $z_k = [r, \theta]^T$.
Due to the weak manoeuvrability of most maritime targets, for the sake of simplicity, this paper assumes that the target’s motion conforms to a Nearly Constant Velocity (NCV) model in Cartesian coordinates. Assuming that the radar’s scanning period is $T$, the above motion model and the observation equation in the polar coordinate system can be represented as follows [25]:
$$x_{k+1} = f_k(x_k) + u_k, \tag{19}$$
$$z_k = H_k x_k + n_k, \tag{20}$$
where
$$f_k(\cdot) = \begin{bmatrix} \sqrt{\left[r\cos\theta + (\nu_r\cos\theta - \nu_\theta\sin\theta)T\right]^2 + \left[r\sin\theta + (\nu_r\sin\theta + \nu_\theta\cos\theta)T\right]^2} \\ \arctan\dfrac{r\sin\theta + (\nu_r\sin\theta + \nu_\theta\cos\theta)T}{r\cos\theta + (\nu_r\cos\theta - \nu_\theta\sin\theta)T} \\ (\nu_r\sin\theta + \nu_\theta\cos\theta)\sin\theta_{k+1} + (\nu_r\cos\theta - \nu_\theta\sin\theta)\cos\theta_{k+1} \\ (\nu_r\sin\theta + \nu_\theta\cos\theta)\cos\theta_{k+1} - (\nu_r\cos\theta - \nu_\theta\sin\theta)\sin\theta_{k+1} \end{bmatrix}, \tag{21}$$
in which $\theta_{k+1}$ denotes the predicted azimuth given by the second component, and
$$H_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}. \tag{22}$$
Since the motion Equation (19) is a nonlinear model, this paper utilizes the EKF [2] to estimate the state $x_k$. The detailed calculation process of the EKF and the derivation of (21) are displayed in Appendix A and Appendix B, respectively.
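Under the NCV-in-Cartesian reading of (21), the transition can be sketched as follows. This is an illustrative sketch, not the paper's code: the function name `f_k` and the explicit re-projection of the constant Cartesian velocity onto the predicted bearing are this example's assumptions.

```python
import numpy as np

def f_k(x, T):
    """Polar-coordinate NCV transition, cf. Eq. (21). x = [r, theta, v_r, v_theta]."""
    r, th, vr, vth = x
    # Constant Cartesian velocity implied by the polar velocity components
    vx = vr * np.cos(th) - vth * np.sin(th)
    vy = vr * np.sin(th) + vth * np.cos(th)
    # Cartesian position after one scan period T
    px = r * np.cos(th) + vx * T
    py = r * np.sin(th) + vy * T
    # Back to polar: new range and azimuth
    r_new = np.hypot(px, py)
    th_new = np.arctan2(py, px)
    # Re-project the (unchanged) Cartesian velocity onto the predicted bearing
    vr_new = vx * np.cos(th_new) + vy * np.sin(th_new)
    vth_new = -vx * np.sin(th_new) + vy * np.cos(th_new)
    return np.array([r_new, th_new, vr_new, vth_new])
```

A quick consistency check: propagating a target at range 1000 with purely tangential velocity for one period must land on the Cartesian constant-velocity trajectory, and converting the predicted polar velocity back to Cartesian must recover the original velocity.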
Given the measurements $Z_k$, the EKF iteratively calculates the first and second moments of $x_k \in \mathbb{R}^n$, and approximates the posterior PDF of $x_k$ by a Gaussian distribution:
$$p(x_k \mid Z_k) = \mathcal{N}(x_k; m_x, \Sigma_x), \tag{23}$$
where $m_x$ denotes the mean of $x_k$ and $\Sigma_x$ denotes the covariance matrix. The expressions for $m_x$ and $\Sigma_x$ are displayed in Appendix A.
As discussed in Section 2.3, the key is to solve for the posterior PDF of $v_k$ as in (15). In the motion model (19), $v_k$ denotes the range and azimuth at which the target is located, i.e., $v_k = [r, \theta]^T$. Consequently, $x_k = [r, \theta, \nu_r, \nu_\theta]^T$ can be decomposed into two components as $x_k = [v_k, e_k]^T$, where $e_k = [\nu_r, \nu_\theta]^T$. Plugging (23) into (17), the posterior PDF of the target’s position $v_k$ can be derived as:
$$p(v_k \mid Z_k) = \int \mathcal{N}(x_k; m_x, \Sigma_x)\, de_k = \mathcal{N}(v_k; m_1, \Sigma_1), \tag{24}$$
where $m_1 = H m_x$, $\Sigma_1 = H \Sigma_x H^T$, and $H = [I_2, O_2]$.
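The marginalization in (24) is linear, so it reduces to applying the selection matrix $H = [I_2, O_2]$ to the EKF moments. A minimal sketch (the function name is this example's invention):

```python
import numpy as np

def marginalize_position(m_x, P_x):
    """Marginalize the 4-D EKF posterior N(x_k; m_x, P_x) over e_k = [v_r, v_theta].

    Per Eq. (24), the position marginal is N(v_k; H m_x, H P_x H^T)
    with H = [I_2, O_2] selecting the range/azimuth components.
    """
    H = np.hstack([np.eye(2), np.zeros((2, 2))])
    return H @ m_x, H @ P_x @ H.T
```

For a block-diagonal covariance this simply extracts the position mean and the top-left 2×2 covariance block.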

2.4.2. Correlation Filter Module

The training labels of KCF are a two-dimensional (2D) needle-shaped Gaussian [20,23]. If the test sample is similar to the training samples, the response $f(a_k)$ of KCF given by () is a needle-shaped Gaussian similar to the training labels. Thus, the position $v_k$ can be assumed to obey a 2D Gaussian distribution; such an assumption simplifies the further discussion and is reasonable in most cases. As discussed in Section 2.3, given $A_k$, the likelihood PDF of the position variable $v_k$ is computed by:
$$p(A_k \mid v_k) = \mathcal{N}(v_k; m_2, \Sigma_2), \tag{25}$$
where $m_2 = \mu$ and $\Sigma_2 = \sigma^2 I_m$. For KCF, $\mu$ denotes the location of the maximum value $y_p$ in the response map, and $\sigma^2$ is inversely proportional to $y_p$, i.e., $\sigma^2 = \frac{1}{2\pi y_p}$.
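Turning a response map into the Gaussian likelihood (25) can be sketched as below (illustrative only; the helper name `cf_likelihood` is this example's invention, and row/column indices stand in for range/azimuth cells):

```python
import numpy as np

def cf_likelihood(response):
    """Approximate p(A_k | v_k) by N(v_k; m_2, sigma^2 I), per Eq. (25).

    m_2 is the location of the response peak y_p, and sigma^2 = 1 / (2*pi*y_p),
    so a stronger peak yields a tighter (more confident) likelihood.
    """
    idx = np.unravel_index(np.argmax(response), response.shape)
    y_p = response[idx]
    m2 = np.array(idx, dtype=float)
    sigma2 = 1.0 / (2.0 * np.pi * y_p)
    return m2, sigma2 * np.eye(2)
```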

2.4.3. Fusion Module

This module calculates the maximum a posteriori estimate of the target location by fusing the results of the above two modules.
The product $p(v_k \mid Z_k)\, p(A_k \mid v_k)$ is given by:
$$p(v_k \mid Z_k)\, p(A_k \mid v_k) = \mathcal{N}(v_k; m_1, \Sigma_1)\, \mathcal{N}(v_k; m_2, \Sigma_2) = C_c\, \mathcal{N}(v_k; m_c, \Sigma_c), \tag{26}$$
where
$$\Sigma_c = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1}, \tag{27}$$
$$m_c = \Sigma_c \left(\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2\right), \tag{28}$$
$$C_c = \frac{1}{2\pi\sqrt{\det(\Sigma_1 + \Sigma_2)}} \exp\!\left(-\frac{1}{2}\left(m_1 - m_2\right)^T \left(\Sigma_1 + \Sigma_2\right)^{-1} \left(m_1 - m_2\right)\right). \tag{29}$$
According to (18), the maximum a posteriori estimate of $v_k$ can be derived as (30), and the covariance matrix of the estimation errors is $\Sigma_c$:
$$\hat{v}_k = \arg\max_{v_k} p(v_k \mid Z_k)\, p(A_k \mid v_k) = \arg\max_{v_k} C_c\, \mathcal{N}(v_k; m_c, \Sigma_c) = m_c. \tag{30}$$
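The Gaussian product (26)–(30) has the familiar information-form solution, sketched below (illustrative; the function name is this example's invention):

```python
import numpy as np

def fuse_gaussians(m1, S1, m2, S2):
    """Fuse the BF posterior N(m1, S1) and the CF likelihood N(m2, S2).

    Per Eqs. (27)-(30): Sc = (S1^-1 + S2^-1)^-1,
    mc = Sc (S1^-1 m1 + S2^-1 m2), and the MAP estimate is mc.
    """
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    Sc = np.linalg.inv(S1i + S2i)
    mc = Sc @ (S1i @ m1 + S2i @ m2)
    return mc, Sc
```

With equal isotropic covariances the fused mean is the midpoint of the two means and the fused covariance is halved, i.e., each filter contributes in proportion to its confidence.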

2.5. Strategies to Improve the Stability of Tracker

2.5.1. A Feedback Structure in the JCBF

A feedback module is introduced into the proposed method to improve the stability. The fused results given by JCBF are treated as corrected measurements to re-update BF and CF. The green lines in Figure 1 illustrate this process.
After capturing the target at time $k$, the correlation filter usually updates the model with the new sample, as shown in (12). However, when the target is obscured by clutter or changes significantly in appearance, as shown in Figure 2a, the response of the filter deteriorates significantly, which leads to inaccurate estimation of the target state. The peak-to-sidelobe ratio (PSR) [26] and the average peak-to-correlation energy (APCE) [27] are utilized to evaluate the degree of fluctuation of the response $y$ and the confidence level of the detected targets, given by (31) and (32), respectively:
$$PSR = \frac{y_{\max} - \mu_s}{\sigma_s}, \tag{31}$$
$$APCE = \frac{\left| y_{\max} - y_{\min} \right|^2}{\mathrm{mean}\left(\sum_{w,h} \left(y_{w,h} - y_{\min}\right)^2\right)}, \tag{32}$$
where $\mu_s$ and $\sigma_s$ are the mean and standard deviation of the sidelobe, respectively, and $\sum$ denotes summation.
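The two confidence measures (31) and (32) can be sketched as below. This is illustrative only: the paper does not specify the sidelobe region, so excluding a small window around the peak (the `exclude` parameter) is a common-practice assumption, as are the function names.

```python
import numpy as np

def psr(response, exclude=2):
    """Peak-to-sidelobe ratio, Eq. (31): (y_max - mu_s) / sigma_s.

    The sidelobe is taken as everything outside a (2*exclude+1)-cell
    window around the peak (an assumption; the window size varies in practice).
    """
    idx = np.unravel_index(np.argmax(response), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(idx[0] - exclude, 0):idx[0] + exclude + 1,
         max(idx[1] - exclude, 0):idx[1] + exclude + 1] = False
    side = response[mask]
    return (response.max() - side.mean()) / side.std()

def apce(response):
    """Average peak-to-correlation energy, Eq. (32)."""
    num = (response.max() - response.min()) ** 2
    den = np.mean((response - response.min()) ** 2)
    return num / den
```

As expected, a response map with one sharp peak scores higher on both measures than a flat noise-only map, which is what makes them usable as update-gating confidence indicators.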
In this strategy, the sample re-extracted at the estimated position given by the JCBF, rather than at the original position given by the CF, is utilized to update the CF.
Similarly, the Bayes filter updates the model with new measurements, as in (5). When the target passes through interfering clutter or meets other targets, the measurements can be disturbed, as shown in Figure 2b. To solve this problem, $\hat{v}_k$ is treated as a measurement and fed back to the BF module, together with the covariance matrix $\Sigma_c$ of the estimation errors, to update the BF.
In addition, a confidence threshold $th$ is set to determine whether to update the filters. If the PSR is less than the threshold, neither filter is updated, and the estimate for this frame is replaced by the prediction of the Kalman filter.

2.5.2. A Scale Adaptive Strategy

The traditional Bayes methods [5,6,7,8,9] obtain an extended estimate of the target by partitioning its multiple measurements; however, contaminated measurements may degrade the performance of these methods. Without relying on measurements, existing scale-adaptive CF methods [21,22] proposed for optical images calculate the responses of samples scaled at multiple scales by defining a scaling pool $S = \{t_i\}_{i=1:l} = \{t_1, t_2, \ldots, t_l\}$. For the current frame, the template size is fixed as $s = (w, h)$, and samples $z$ with sizes in $\{s \cdot t_i\}$ are drawn to find the proper target. The scale of the sample with the largest response is considered the target’s size. These methods do not distort the appearance of the target because they scale both the length and width at the same ratio, which is valid since optical images have the same resolution in both dimensions. However, the resolutions of radar in the two dimensions differ: the azimuthal resolution is determined by the beamwidth of the antenna, while the range resolution is determined by the bandwidth of the radar’s emitted signal. Therefore, scaling the sample at the same ratio in both dimensions would distort radar targets. In addition, clutter may be introduced into the sample $z$ when the size expands. In summary, the above method, despite its superior performance on optical images, is not suitable for radar targets.
Unlike targets in optical images, the expansion of a radar target in azimuth, but not in range, becomes more evident as it approaches the radar, due to the limited azimuthal resolution of the radar and the variation in the radar’s view. Consequently, a scaling pool (33) for radar targets is proposed, which scales the length and width of the template at different ratios, with the width scaled at a greater gradient than the length. The gradient of the scaling pool is defined as $\Delta S = [\Delta t_w, \Delta t_h]^T$, and $l$ denotes the length of the scaling pool.
$$S = \{t_i\}_{i=1:l} = \left\{ \begin{bmatrix} t_w^1 \\ t_h^1 \end{bmatrix}, \begin{bmatrix} t_w^2 \\ t_h^2 \end{bmatrix}, \ldots, \begin{bmatrix} t_w^l \\ t_h^l \end{bmatrix} \right\}. \tag{33}$$
The further operations are the same as in [22], except for the proposed scaling pool. The scaling size is given by (34):
$$t = \arg\max_{t_i} f(z_{t_i}), \tag{34}$$
where $z_{t_i}$ denotes the sample $z$ with scaling size $t_i$.
In the end, the response of KCF with scaling size t will be used in JCBF.
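The scale search (34) over the radar scaling pool can be sketched as follows. This is illustrative only: `response_fn` stands in for sampling $z$ at a given size and running the CF, and the pool values here are made up, with the width ratios spaced more widely than the height ratios as the strategy prescribes.

```python
import numpy as np

def best_scale(base_size, scaling_pool, response_fn):
    """Pick the (t_w, t_h) pair from the pool maximizing the CF response, Eq. (34).

    base_size:    fixed template size s = (w, h)
    scaling_pool: list of (t_w, t_h) ratio pairs, Eq. (33)
    response_fn:  hypothetical stand-in that samples z at a size and
                  returns the CF response map for it
    """
    scores = [float(np.max(response_fn((base_size[0] * tw, base_size[1] * th))))
              for tw, th in scaling_pool]
    return scaling_pool[int(np.argmax(scores))]
```

With a toy response that peaks when the sampled size matches a hypothetical true extent, the search returns the pool entry closest to that extent.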

2.6. Framework of the Proposed JCBF

The proposed method is summarized in Algorithm 1. Given the position and size of the target in the initial frame, firstly, $g$ is sampled to train the KCF using (). In each subsequent frame, the response of the target is given by (10), and $p(A_k \mid v_k)$ is calculated by (25). It is worth noting that the scale adaptive strategy is only used to track targets with varying shapes. Meanwhile, a segmentation algorithm [23] (‘detector’ in Algorithm 1) is used to obtain measurements $z'$ of the target. Next, the EKF module is used to calculate $p(v_k \mid Z_k)$. Then, the results of the two filters are fused to obtain the state estimate $\hat{v}$ of the target, as in (30). Finally, the PSR [23,27] is utilized to determine whether to update the filters.
Algorithm 1: Joint correlation filter and Bayes filter.
Input
  • $v_0$: the position of the target in the initial frame;
  • $b_0$: the bounding box of the target in the initial frame;
  • $G$: data matrix;
  • $N$: total frames;
  • $S$: scaling pool.
Output
  • $\hat{v}$: the position of the target in each latter frame.
1: for $k = 1$ to $N$ do
2:  if $k$ = initial frame then
3:   Sample $g$ at position $v_0$ in size of $b_0$
4:   Generate Gaussian needle-shaped label $y$
5:   train: $\alpha = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}(y)}{\mathcal{F}(k^{gg}) + \lambda}\right)$
6:   $\hat{v} = v_0$
7:  end if
8:  if $k \ne$ initial frame then
9:   if step KCF then
10:    for $t_i$ in $S$ do
11:     Sample $g'$ (denoting $a_k$) at position $\hat{v}$ in size of $b_0 \cdot t_i$
12:     test: $f(g'_{t_i}) = \mathcal{F}^{-1}\!\left(\mathcal{F}(k^{g g'}) \odot \mathcal{F}(\alpha)\right)$
13:    end for
14:    $t = \arg\max_{t_i} f(g'_{t_i})$
15:    The response of KCF is $f(g'_t)$
16:    Calculate $p(A_k \mid v_k)$ by (25)
17:   end if
18:   if step EKF then
19:    $z_k = \mathrm{detector}(g)$
20:    Calculate $p(v_k \mid Z_k)$ by (24)
21:   end if
22:   if step Fusion then
23:    Calculate the estimate $\hat{v}$ by (30)
24:    Calculate the covariance matrix $\Sigma_c$ of estimation errors by (27)
25:    if PSR > $th$ then
26:     Update KCF by (12)
27:     Update EKF using $\hat{v}$ and $\Sigma_c$
28:    end if
29:   end if
30:  end if
31: end for

2.7. Parameters Setup

Table 2 lists the values of the key parameters of JCBF, which are set depending on the dataset used for the experiments. A detailed description of this dataset is presented in Section 3.1.

2.8. Evaluation Methodology

The experiments in this paper are performed using the data matrices of radar echoes. The proposed JCBF and several selected trackers are used to track each target separately. The results of each tracker are the center location and the bounding box of the targets. The targets in the radar monitoring scenario are non-cooperative, so the real motion information and the real labels of the targets cannot be obtained. Therefore, the experimental results are compared with manually generated labels to evaluate the trackers’ performance.
Average center location error (CLE) and bounding box overlap ratio are calculated to evaluate the difference between the results and the ground truth. As performance criteria, the precision curves widely applied in visual tracking [20,23,28] are also utilized to show the percentage of correctly tracked frames. A target is considered correctly tracked if the predicted target center is within a distance threshold of the ground truth.
The location precision and overlap precision are given by (35) and (36), respectively:
$$LP(G, T, th_{lp}) = \frac{\mathrm{sum}\left(\mathrm{dist}(G, T) \le th_{lp}\right)}{\mathrm{len}(G)}, \tag{35}$$
$$OP(G, T, th_{op}) = \frac{\mathrm{sum}\left(\mathrm{overlap}(G, T) \ge th_{op}\right)}{\mathrm{len}(G)}, \tag{36}$$
where $G$ and $T$ denote the ground truth set and the tracking results set, respectively. $\mathrm{dist}(\cdot)$ calculates the distance between the center of the ground truth and the center of the tracking result, and $\mathrm{overlap}(\cdot)$ calculates the intersection-over-union (IOU) between the ground truth and the tracking result. $th_{lp} \in [0, 50]$ and $th_{op} \in [0, 1]$ denote the location error threshold and the overlap threshold, respectively.
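The two precision measures can be sketched on per-frame errors and IOUs as below (illustrative only; the function names are this example's inventions, and the ≤/≥ conventions follow the surrounding text's definition of a correctly tracked frame):

```python
import numpy as np

def location_precision(dists, th_lp):
    """LP, Eq. (35): fraction of frames whose center error is within th_lp."""
    d = np.asarray(dists, dtype=float)
    return np.sum(d <= th_lp) / d.size

def overlap_precision(ious, th_op):
    """OP, Eq. (36): fraction of frames whose IOU reaches th_op."""
    o = np.asarray(ious, dtype=float)
    return np.sum(o >= th_op) / o.size
```

Sweeping `th_lp` over $[0, 50]$ and `th_op` over $[0, 1]$ yields the precision curves shown later in Figure 5.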
Three traditional Bayes tracking methods (EKF [2], the RM-based tracker [5], and the GGIW-based tracker [9]) and two correlation filter-based methods (KCF [20] and the Scale Adaptive with Multiple Features tracker (ASKCF) [21]) are chosen for comparison with JCBF. The same measurements are provided to the three Bayes methods and the proposed JCBF. In addition, the parameters of the two correlation filters are the same as those of the proposed JCBF.
It is worth noting that a KF is used to smooth the measurements in both the RM-based and GGIW-based trackers, and the KF parameters of the two trackers are the same as those of JCBF. To show the performance improvement of the JCBF, the EKF supplemented by nearest-neighbor data association is also tested on the same dataset. Furthermore, the IOU for the EKF is calculated based on the target size in the first frame, as the EKF cannot estimate the size of targets under the point target hypothesis.

3. Experiment and Results Analysis

3.1. Data Sets Analysis

Experimental data sets were provided by Naval Aviation University [29,30] and collected on 22 July 2020. Overall, 136 frames of scanning data were collected by an X-band marine radar near Yantai harbor, Yantai, China. The radar scans at 24 rpm and emits rectangular pulses with a PRF of 1.6 kHz. A frame of the radar image in the original data contains 2224 cells in range and 1729 cells in azimuth. The detailed parameters of the experimental radar are displayed in Table 3.
Moreover, some of the azimuth cells in the original data are duplicated or missing; thus, the data need to be corrected. The size of the corrected data is 2224 × 2078 (range cells × azimuth cells).
There are twelve targets in the monitoring scene, whose manually labeled real trajectories are shown in Figure 3. The radar was interfered with by unknown objects during the scanning process, resulting in blocking of the 571st to 716th azimuth cells, marked by the white dotted boxes in Figure 3. The trajectories of targets passing through this area are divided into two segments, because the radar echoes contain no useful information while the targets pass through it, e.g., target 1 (target 9) and target 6 (target 10).

3.2. Performance Analysis

The tracking trajectories of the twelve targets obtained by the proposed method are shown in Figure 4.

3.2.1. Time Cost

The above experiments were performed on a workstation with an i5-10500 3.10 GHz CPU and 24 GB RAM. The time cost per target per frame is recorded in Table 4. The proposed method costs 0.275 s per target per frame, more than the other methods, because two filters are utilized in JCBF.

3.2.2. Error and Precision

Table 5, Table 6 and Table 7 compare the center location error, location precision, and overlap precision of the results obtained from the six trackers, respectively. Additionally, the average location precision curves and average overlap precision curves are displayed in Figure 5a,b, respectively, which present the results more visually. Larger values of location precision and overlap precision indicate more accurate results.
As can be seen from Table 5 and Table 6, the proposed JCBF, with the lowest average location error and highest average location precision, can accurately capture the center of the target. The results in Figure 5 and Table 7 show that the proposed method also performs better than the others in estimating the size of the target.
GGIW and RM tend to misclassify measurements when the shape of the target changes or the target is disturbed by clutter. In addition, the scaling pool used by ASKCF is not suitable for radar targets, as discussed in Section 2.5.2, which results in location and shape estimation errors; it also more easily introduces clutter into the sampling area. Furthermore, the difference between the target and clutter is mostly reflected in the phase of the radar echo rather than its amplitude, so clutter is easily mistaken for the target by ASKCF, degrading its performance. Therefore, ASKCF is not suitable for radar targets. It is worth noting that although the OP of KCF and EKF is shown to demonstrate the performance improvement of JCBF relative to them, this does not mean that they have the ability to estimate the size of targets.
Comparing the above results, the proposed JCBF significantly outperforms the other tracking algorithms. The reason is that JCBF exploits both the motion and appearance features of the target, which improves its ability to distinguish between targets and clutter. Moreover, the feedback strategy improves the stability of the trackers, and the proposed scaling pool is more suitable for radar targets.

3.3. Tracking Results Analysis

Table 8 lists the challenging situations in the data sets. Some typical targets, shown in Figure 6, are selected to visually illustrate these situations; the white boxes in the figures indicate the position of the target and the white arrows indicate the direction of target movement. When the maritime radar works in scanning mode, target 1 exhibits a different azimuthal expansion at each scan due to the radar's beam width, as shown in Figure 6a–d. Target 2 is obscured (see Figure 6e–g) or interfered with (see Figure 6h) by ground clutter. Furthermore, target 12 is a weak target with low SNR in most frames; its SNR is statistically 7.3 dB. The four best-performing algorithms (JCBF, EKF, KCF, GGIW) in Section 3.2 are selected for detailed analysis.

3.3.1. Shape or Appearance Changing

Figure 7 shows the tracking bounding boxes of target 1 given by JCBF and the other trackers at some key frames, where red boxes denote JCBF, pink boxes denote KCF, yellow boxes denote EKF, and blue boxes denote GGIW. Missing trackers are denoted by an '×'.
Due to azimuthal scaling, the target tends to produce multiple measurements, which can easily lead to association errors during tracking. It can be seen from the results in Figure 7 that EKF estimates the position of the target incorrectly and gradually loses it when the target's shape changes, because EKF only selects the measurement closest to the prediction. In addition, the measurements are widely dispersed in azimuth, which causes GGIW to cluster them incorrectly. KCF tracks reasonably well in the first few frames, while the target's scale changes little, but loses the target in both Figure 7e,f because the template size cannot change with the scaling of the target. In comparison, the proposed JCBF precisely captures the change in the target's size.
In order to illustrate the advantages of the proposed algorithm in dealing with target deformation more directly, the response maps of KCF and JCBF at frame 23 are compared in Figure 8. It can be seen that the response map of KCF fluctuates intensely because the target has multiple strong scattering points, while that of JCBF is high-confidence, with a high PSR.
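The peak-to-sidelobe ratio (PSR) is the standard sharpness measure for correlation response maps, introduced with MOSSE [26]; a common way to compute it is sketched below (the size of the peak-exclusion window is an illustrative assumption):

```python
import numpy as np

def psr(response, excl=5):
    """Peak-to-sidelobe ratio of a 2-D correlation response map.

    The sidelobe is everything outside a (2*excl+1)^2 window around
    the peak; a higher PSR means a sharper, more confident peak.
    """
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - excl):py + excl + 1, max(0, px - excl):px + excl + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```

A map with many secondary peaks (as produced by KCF on a multi-scatterer target) yields a much lower PSR than a map with a single sharp peak, which is why PSR can serve as a confidence score.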

3.3.2. Targets Interfered with by Clutter

Figure 9a–c compare the results of target 2 given by all trackers when the target is briefly obscured by clutter. Figure 9d–f show the results of target 2 under the circumstance that it approaches land at high speed and is interfered with by ground clutter.
It is seen from Figure 10 that the false sharp peak of the response map of KCF lies within the clutter region, because the target is obscured by stationary clutter from frame 94 to frame 96, which results in an incorrect KCF position, as shown in Figure 9b,c. However, the target's motion model is not affected by the clutter, and when the target reappears, the EKF module in the proposed JCBF correctly tracks the target using real-time measurements, as shown in Figure 9c.
After frame 120, target 2 maneuvers closer to land and swerves quickly, so its motion no longer conforms to the preset uniform linear motion model and its measurements are interfered with by clutter. This causes incorrect associations and results in the traditional trackers (EKF, GGIW) drifting, as shown in Figure 9d–f. However, response maps with high confidence are still obtained in JCBF, as the appearance of the target remains intact, as shown in Figure 11. As a result, the proposed JCBF locates the centroid of the target precisely.

3.3.3. Weak Targets

Figure 12a–c give the results of target 12 from all trackers. Under a low signal-to-clutter/noise ratio (SCR/SNR) environment, existing detectors have poor detection performance and cannot provide effective measurements for the trackers, which results in tracking errors for the traditional tracking-by-detection trackers (EKF, GGIW). As shown in Figure 12, EKF is prone to location errors and GGIW incorrectly estimates the target's size due to the lack of valid measurements. However, as displayed in Figure 12d–f, JCBF is still able to accurately capture target 12 because of the high-confidence responses of the KCF module. Moreover, the feedback strategy continuously feeds repositioned high-confidence measurements and samples to the EKF and KCF modules, which improves the stability of the proposed JCBF at low SNR.
According to the above analysis, the performance of the JCBF is significantly better than the traditional methods since both the appearance information and the motion information of the target are utilized.

4. Discussion

4.1. Associations with Previous Research

Traditional radar target tracking algorithms rely on temporal evolution models and observation models, using accumulated detector measurements as input, with the aim of iteratively calculating the posterior PDF of the target's kinematic state. These techniques can be described within a unified Bayesian formalism [4]. With the improvement of modern radar resolution, the extended characteristics of targets have received more attention, and a number of extended target tracking algorithms within the Bayes framework have been proposed [5].
In the field of visual target tracking, the specificity of a target's appearance features makes them important for discriminating specific targets. At the same time, the complexity of appearance features makes them intractable to model statistically, leading to a range of data-driven, learning-based algorithms [18,20]. These algorithms allow for the efficient extraction and characterization of the target's appearance information, but they also make the representation of appearance features hard to include in the unified Bayesian formalism.
This paper proposes a method that incorporates the output of the CF into a BF formalism, thus providing a scheme for combining the appearance information extracted by learning-based algorithms with the motion information of the target; it derives from the idea, introduced in [23], of regarding the output of the CF as a PDF of the target position. Through this, the proposed method combines the specificity of the appearance representation provided by learning-based algorithms with the memorability of the motion properties offered by the Bayesian approach. The experimental results demonstrate the effectiveness of the proposed method, which has significant advantages over the traditional BF and CF in scenarios of clutter interference, target shape changes, and low SNR.
Based on the unity of the Bayesian formalism, it is reasonable to believe that most tracking algorithms which can be described within Bayesian formalism have the potential to be combined with CF using the method proposed in this paper.

4.2. Suggestions for Future Work

On the one hand, the CF relies on labeled data for initialization, which makes automatic track initiation problematic for the proposed method; addressing automatic initiation is therefore essential. On the other hand, the combination of other data-driven algorithms with the Bayesian formalism and the extension of this method to multi-target scenarios are also subjects for further research.

5. Conclusions

This paper proposes a new tracking framework based on a correlation filter and a Bayes filter for marine radar single-target tracking. The proposed JCBF is able to track the motion state and appearance of the target simultaneously by fusing the correlation filter and the Bayes filter. By taking into account more of the available information about the radar target, the proposed method maintains robustness in complex environments. The feedback structure improves the tracker's stability, and the scale adaptive strategy enables the tracker to adapt to target shape changes. Experiments on X-band marine radar data demonstrate that JCBF performs better than traditional methods in complex situations such as clutter interference, target shape changes, and low SNR. The experiments also indicate that statistical learning methods such as KCF have the potential to be widely used in radar target tracking.
There is still room for improving the proposed method for multi-target tracking; a JCBF-based multi-target tracking framework will be investigated in depth in future work.

Author Contributions

Methodology, J.L.; Software, J.L.; Validation, J.L.; Investigation, J.L.; Data curation, J.L. and Z.W.; Writing—original draft, J.L.; Writing—review & editing, J.L., Z.W., D.C., W.C. and C.C.; Visualization, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China grant number 61971392.

Data Availability Statement

The data presented in this study are openly available in a publicly accessible repository: [Sea-detecting X-band Radar Data] at [doi:10.12000/JR21011], reference number [29]. The data can be found here: https://radars.ac.cn.

Acknowledgments

The authors would like to thank Marine Target Detection Research Group in Naval Aviation University for supplying the radar data used in this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Process of EKF

This paper uses the first-order extended Kalman filter to deal with the nonlinear filtering problem:
$$x_{k+1} = f_k(x_k) + u_k, \qquad z_k = H_k x_k + n_k,$$
where $x_k = [r, \theta, \nu_r, \nu_\theta]^T$, and $u_k$ and $n_k$ are additive Gaussian white noise with $E[u_k u_i^T] = Q_k \delta_{ki}$ and $E[n_k n_i^T] = R_k \delta_{ki}$.
The main steps are as follows:
  • Assume that the estimate $\hat{x}_{k-1}$ and the covariance matrix of the estimation errors $\Sigma_{k-1}$ have been obtained at time $k-1$.
  • Calculate the Jacobian matrix $F(x_k)$ of $f_k(\cdot)$. Writing $D = \sqrt{r^2 + 2 r \nu_r T + (\nu_r^2 + \nu_\theta^2) T^2}$,
$$F(x_k) = \begin{bmatrix} \dfrac{r + \nu_r T}{D} & 0 & \dfrac{(r + \nu_r T)T}{D} & \dfrac{\nu_\theta T^2}{D} \\ -\dfrac{\nu_\theta T}{D^2} & 1 & -\dfrac{\nu_\theta T^2}{D^2} & \dfrac{(r + \nu_r T)T}{D^2} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
  • Predict the state at time $k$:
$$\hat{x}_{k|k-1} = f_{k-1}(\hat{x}_{k-1}).$$
  • Calculate the predicted covariance matrix:
$$\Sigma_{k|k-1} = F(\hat{x}_{k-1}) \, \Sigma_{k-1} \, F^T(\hat{x}_{k-1}) + Q_{k-1}.$$
  • Calculate the Kalman gain matrix:
$$K_k = \Sigma_{k|k-1} H_k^T \left( H_k \Sigma_{k|k-1} H_k^T + R_k \right)^{-1}.$$
  • Update the state estimate:
$$\hat{x}_k = \hat{x}_{k|k-1} + K_k \left( z_k - H_k \hat{x}_{k|k-1} \right).$$
  • Update the covariance matrix:
$$\Sigma_k = \left( I - K_k H_k \right) \Sigma_{k|k-1}.$$
$\hat{x}_k$ and $\Sigma_k$ denote the mean $m_x$ and the covariance matrix $\Sigma_x$ in (23), respectively:
$$m_x = \hat{x}_{k|k-1} + K_k \left( z_k - H_k \hat{x}_{k|k-1} \right), \qquad \Sigma_x = \left( I - K_k H_k \right) \Sigma_{k|k-1}.$$
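The predict–update recursion above can be sketched in a few lines (a minimal NumPy illustration of a single EKF cycle; the linear test model in the usage note below is an assumption for checking, not the paper's radar model):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, H, Q, R):
    """One predict-update cycle of the first-order EKF."""
    # Predict
    x_pred = f(x)                          # state prediction via f
    F = F_jac(x)                           # Jacobian of f at the previous estimate
    P_pred = F @ P @ F.T + Q               # predicted covariance
    # Update
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # state update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the radar model, f and F_jac would be the polar-coordinate CV transition and the Jacobian given above; with a linear f the step reduces to the ordinary Kalman filter.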

Appendix B. The Derivation of Motion Model

In the Cartesian coordinate system, the state of the target can be expressed as x , y , ν x , ν y T , where x and y denote the target’s position, ν x and ν y denote the target’s velocity. Assuming that the radar’s scanning period is T, the CV motion model can be expressed as:
$$\begin{bmatrix} x \\ y \\ \nu_x \\ \nu_y \end{bmatrix}_{k+1} = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ \nu_x \\ \nu_y \end{bmatrix}_k.$$
In the polar coordinate system, the motion state of the target can be represented by x k = r , θ , ν r , ν θ T , where r denotes the radial range between the target and the radar, θ denotes the azimuth, ν r denotes the radial velocity and ν θ denotes the tangential velocity. As shown in Figure A1, the parameters of the polar and Cartesian coordinate systems conform to the following transformation relationship [31]:
$$r = \sqrt{x^2 + y^2}, \qquad \theta = \arctan\frac{y}{x}, \qquad \nu_r = \frac{x \nu_x + y \nu_y}{\sqrt{x^2 + y^2}}, \qquad \nu_\theta = \frac{x \nu_y - y \nu_x}{\sqrt{x^2 + y^2}}.$$
Figure A1. Geometric relationship between the target and the radar.
According to (A12), the motion model (A11) in the polar coordinate system can be expressed as:
$$\begin{bmatrix} r \\ \theta \\ \nu_r \\ \nu_\theta \end{bmatrix}_{k+1} = f\!\left( \begin{bmatrix} r \\ \theta \\ \nu_r \\ \nu_\theta \end{bmatrix}_k \right),$$
where
$$f(\cdot) = \begin{bmatrix} \sqrt{\left[ r\cos\theta + (\nu_r\cos\theta - \nu_\theta\sin\theta)T \right]^2 + \left[ r\sin\theta + (\nu_r\sin\theta + \nu_\theta\cos\theta)T \right]^2} \\[4pt] \arctan \dfrac{r\sin\theta + (\nu_r\sin\theta + \nu_\theta\cos\theta)T}{r\cos\theta + (\nu_r\cos\theta - \nu_\theta\sin\theta)T} \\[4pt] (\nu_r\sin\theta + \nu_\theta\cos\theta)\sin\theta' + (\nu_r\cos\theta - \nu_\theta\sin\theta)\cos\theta' \\[4pt] (\nu_r\sin\theta + \nu_\theta\cos\theta)\cos\theta' - (\nu_r\cos\theta - \nu_\theta\sin\theta)\sin\theta' \end{bmatrix},$$
where $\theta'$ denotes the updated azimuth given by the second component.
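Numerically, this transition can be evaluated by converting the polar state to Cartesian coordinates, applying the linear CV step, and converting back; a minimal Python sketch (illustrative, not the paper's implementation):

```python
import math

def polar_cv_step(r, theta, vr, vtheta, T):
    """Propagate the polar state (r, theta, v_r, v_theta) one scan
    period T under the constant-velocity model, via Cartesian."""
    # Polar -> Cartesian (inverse of the transformation above)
    x, y = r * math.cos(theta), r * math.sin(theta)
    vx = vr * math.cos(theta) - vtheta * math.sin(theta)
    vy = vr * math.sin(theta) + vtheta * math.cos(theta)
    # Linear CV step
    x, y = x + vx * T, y + vy * T
    # Cartesian -> polar
    r_new = math.hypot(x, y)
    theta_new = math.atan2(y, x)
    vr_new = (x * vx + y * vy) / r_new
    vtheta_new = (x * vy - y * vx) / r_new
    return r_new, theta_new, vr_new, vtheta_new
```

For a purely radial motion the range simply grows by vr·T, while a purely tangential motion changes both range and azimuth, matching the closed-form expression.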

References

  1. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
  2. Daeipour, E.; Bar-Shalom, Y. An interacting multiple model approach for target tracking with glint noise. IEEE Trans. Aerosp. Electron. Syst. 1995, 31, 706–715.
  3. Julier, S.J.; Uhlmann, J.K.; Durrant-Whyte, H.F. A new approach for filtering nonlinear systems. In Proceedings of the 1995 American Control Conference, Seattle, WA, USA, 21–23 June 1995; Volume 3.
  4. Mahler, R.P.S. Advances in Statistical Multisource-Multitarget Information Fusion; Artech House: London, UK, 2014.
  5. Koch, J.W. Bayesian approach to extended object and cluster tracking using random matrices. IEEE Trans. Aerosp. Electron. Syst. 2008, 44, 1042–1059.
  6. Feldmann, M.; Fränken, D.; Koch, W. Tracking of Extended Objects and Group Targets Using Random Matrices. IEEE Trans. Signal Process. 2011, 59, 1409–1420.
  7. Orguner, U. A Variational Measurement Update for Extended Target Tracking with Random Matrices. IEEE Trans. Signal Process. 2012, 60, 3827–3834.
  8. Granström, K.; Orguner, U. A PHD Filter for Tracking Multiple Extended Targets Using Random Matrices. IEEE Trans. Signal Process. 2012, 60, 5657–5671.
  9. Granström, K.; Natale, A.; Braca, P.; Ludeno, G.; Serafino, F. Gamma Gaussian Inverse Wishart Probability Hypothesis Density for Extended Target Tracking Using X-Band Marine Radar Data. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6617–6631.
  10. Baum, M.; Hanebeck, U.D. Extended Object Tracking with Random Hypersurface Models. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 149–159.
  11. Zea, A.; Faion, F.; Baum, M.; Hanebeck, U.D. Level-set random hypersurface models for tracking nonconvex extended objects. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2990–3007.
  12. Wahlström, N.; Özkan, E. Extended Target Tracking Using Gaussian Processes. IEEE Trans. Signal Process. 2015, 63, 4165–4178.
  13. Shui, P.L.; Li, D.C.; Xu, S.W. Tri-feature-based detection of floating small targets in sea clutter. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 1416–1430.
  14. Sun, J.P.; Liu, C.; Li, Q.; Chen, X.L. Labeled multi-Bernoulli filter with amplitude information for tracking marine weak targets. IET Radar Sonar Navig. 2019, 13, 983–991.
  15. Kim, D.; Hwang, I. Dynamic model-based feature aided data association filter in target tracking. IET Radar Sonar Navig. 2019, 14, 279–289.
  16. Zhu, Y.P.; Wang, Z.; Yin, Y.P.; Chen, W.D. Feature-aided Multi-Target Tracking Method in Sea Clutter Using Scanning Radar Data. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 9–11 July 2021; pp. 615–619.
  17. Yin, Y.P.; Cheng, D.; Dai, Y.L.; Chen, C.; Chen, W.D. Feature-aided GM-PHD Algorithm for Sea-surface Target Tracking. In Proceedings of the 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 3–5 July 2020; pp. 220–225.
  18. Chen, X.L.; Su, N.Y.; Huang, Y.; Guan, J. False-Alarm-Controllable Radar Detection for Marine Target Based on Multi Features Fusion via CNNs. IEEE Sens. J. 2021, 21, 9099–9111.
  19. Su, N.Y.; Chen, X.L.; Guan, J.; Huang, Y. Maritime Target Detection Based on Radar Graph Data and Graph Convolutional Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  20. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-Speed Tracking with Kernelized Correlation Filters. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 583–596.
  21. Danelljan, M.; Hager, G.; Khan, F.S.; Felsberg, M. Accurate scale estimation for robust visual tracking. In Proceedings of the British Machine Vision Conference, Nottingham, UK, 1–5 September 2014; BMVA Press: Durham, UK, 2014.
  22. Li, Y.; Zhu, J. A scale adaptive kernel correlation filter tracker with feature integration. In Proceedings of the European Conference on Computer Vision Workshops, 6–12 September 2014; pp. 254–265.
  23. Zhou, Y.; Wang, T.; Hu, R.H.; Su, H. Multiple Kernelized Correlation Filters (MKCF) for Extended Object Tracking Using X-Band Marine Radar Data. IEEE Trans. Signal Process. 2019, 67, 3676–3688.
  24. Mehmood, K.; Jalil, A.; Ali, A.; Khan, B.; Murad, M.; Khan, W.U.; He, Y. Context-aware and occlusion handling mechanism for online visual object tracking. Electronics 2020, 10, 43.
  25. Yu, C.L.; Zhan, R.H.; Wan, J.W. Research on Robust UKF Algorithm for Single Observer Passive Target Tracking Based on Polar Coordinates. J. Nat. Univ. Def. Technol. 2008, 30, 7.
  26. Bolme, D.S.; Beveridge, J.R.; Draper, B.A.; Lui, Y.M. Visual object tracking using adaptive correlation filters. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2544–2550.
  27. Wang, M.M.; Liu, Y.; Huang, Z.Y. Large Margin Object Tracking with Circulant Feature Maps. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4800–4808.
  28. Babenko, B.; Yang, M.; Belongie, S. Robust Object Tracking with Online Multiple Instance Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1619–1632.
  29. Liu, N.B.; Ding, H.; Huang, Y. Annual Progress of Sea-detecting X-band Radar and Data Acquisition Program. J. Radars 2021, 10, 173–182.
  30. Liu, N.B.; Dong, Y.L.; Wang, G.Q. Sea-detecting X-band radar and data acquisition program. J. Radars 2019, 8, 656–667.
  31. Roecker, J.A.; McGillem, C.D. Target tracking in maneuver centered coordinates. In Proceedings of the 1988 IEEE National Radar Conference, Ann Arbor, MI, USA, 20–21 April 1988; pp. 68–73.
Figure 1. Overall flow of the proposed method. (a) Bayes filter module. (b) Correlation filter module. (c) Fusion module. The green lines indicate the feedback for correcting measurements. Both the process of providing measurements for Kalman filter by detector and the training process of the correlation filter are omitted in the figure.
Figure 2. Common challenging situations. (a) Appearance changing. (b) Interruption measurements.
Figure 3. Manually labeled trajectories of the twelve moving targets.
Figure 4. The tracking results of the proposed JCBF.
Figure 5. Average location precision and overlap precision of twelve targets. (a) Average location precision curves (DP ∈ [0, 1]) of twelve targets. (b) Average overlap precision curves (OP ∈ [0, 1]) of twelve targets.
Figure 6. Some challenging targets in the monitoring scenario. (a–d) Shape or appearance changing. (e–h) Targets interfered with by clutter. (i,j) Weak targets.
Figure 7. Tracking results of target 1 at some key frames. (af): Tracking results for each filter when the shape of target 1 changes.
Figure 8. The response maps of (a) KCF and (b) JCBF (target 1 in frame 23).
Figure 9. Tracking results of target 2 at some key frames. (af) Tracking results for each filter when target 2 is obscured by clutter.
Figure 10. The response maps of KCF (target 2 in frame 95 and frame 96). (a) The response maps in frame 95. (b) The response maps in frame 96.
Figure 11. The response map of JCBF (target 2 in frame 131).
Figure 12. Tracking results of target 12 at some key frames. (a–c) The results of four trackers. (d–f) The responses of the proposed JCBF.
Table 1. Table of notations.
$I_p$ | Identity matrix of order p
$O_p$ | p × p matrix with zero entries
$(\cdot)^T$ | Transpose
$(\cdot)^H$ | Conjugate transpose
$(\cdot)^*$ | Conjugate
$|\cdot|$ | Modulus of a vector
$\|\cdot\|$ | Frobenius norm of a matrix
$\det(\cdot)$ | Determinant of a matrix
$\mathcal{F}(\cdot)$ | Fourier transform
$\mathcal{F}^{-1}(\cdot)$ | Inverse Fourier transform
$mean(\cdot)$ | Mean
$\odot$ | Hadamard product
$\langle \cdot, \cdot \rangle$ | Inner product
$\mathcal{N}(x; m, \Sigma)$ | $\frac{1}{\sqrt{(2\pi)^p \det \Sigma}} \exp\left( -\frac{1}{2}(x - m)^T \Sigma^{-1} (x - m) \right)$
Table 2. Key parameters settings.
Parameter | Interpretation | Value
w, h | Sample's width and height | –
η | Adaptation rate | 0.075
λ | Regularization | 0.01
σ_k | Feature bandwidth | 0.2
σ | Spatial bandwidth | √(wh)/10
ΔS | Gradient of scaling pool | [0.005, 0.001]^T
l | Length of scaling pool | 10
th | Confidence threshold | 10
Table 3. X-band experimental radar parameters.
Parameter | Value
Operating Frequency | 9.3–9.5 GHz
Output Power | 50 W
Antenna Beamwidth | 1.2 deg @ −3 dB
Rate of Rotation | 24 rpm
Number of Range Samples | 1729
Number of Azimuth Samples | 2224
Range Resolution Cell | 6 m
Azimuth Resolution Cell | 360°/5120
Table 4. Cost time per frame.
Methods | RM | GGIW | ASKCF | EKF | KCF | JCBF
Cost time (s) | 0.027 | 0.029 | 0.223 | 0.018 | 0.167 | 0.275
Table 5. Center location error (CLE ↓, ↓ means lower is better).
Targets | RM | GGIW | ASKCF | EKF | KCF | JCBF
target1 | 11.9 | 34.3 | 7.8 | 34.5 | 32.7 | 7.3
target2 | 23.6 | 17.4 | 21.5 | 30.3 | 145.0 | 4.8
target3 | 235.4 | 2.3 | 300.0 | 2.3 | 2.0 | 1.8
target4 | 6.6 | 6.5 | 26.6 | 6.5 | 16.5 | 4.3
target5 | 36.3 | 5.9 | 10.5 | 33.8 | 46.4 | 4.0
target6 | 5.6 | 5.9 | 7.9 | 7.3 | 6.0 | 4.2
target7 | 2.6 | 2.6 | 9.6 | 5.2 | 5.6 | 2.6
target8 | 6.8 | 6.8 | 32.9 | 5.9 | 4.5 | 5.9
target9 | 4.1 | 4.3 | 318.2 | 3.8 | 3.3 | 4.2
target10 | 5.4 | 5.6 | 12.8 | 5.6 | 5.1 | 5.2
target11 | 88.9 | 2.4 | 267.7 | 61.9 | 30.1 | 2.0
target12 | 44.1 | 3.1 | 296.8 | 6.5 | 12.5 | 3.5
Average | 39.3 | 8.7 | 109.4 | 16.7 | 25.8 | 4.1
Table 6. Average location precision (LP ∈ [0, 1] ↑, ↑ means higher is better).
Targets | RM | GGIW | ASKCF | EKF | KCF | JCBF
target1 | 0.750 | 0.648 | 0.834 | 0.645 | 0.688 | 0.844
target2 | 0.773 | 0.768 | 0.681 | 0.727 | 0.330 | 0.895
target3 | 0.106 | 0.944 | 0.177 | 0.943 | 0.948 | 0.954
target4 | 0.858 | 0.860 | 0.459 | 0.861 | 0.716 | 0.905
target5 | 0.629 | 0.871 | 0.778 | 0.664 | 0.538 | 0.910
target6 | 0.879 | 0.873 | 0.831 | 0.879 | 0.871 | 0.908
target7 | 0.938 | 0.939 | 0.798 | 0.888 | 0.876 | 0.939
target8 | 0.852 | 0.853 | 0.519 | 0.897 | 0.899 | 0.875
target9 | 0.910 | 0.904 | 0.053 | 0.912 | 0.924 | 0.911
target10 | 0.883 | 0.877 | 0.733 | 0.880 | 0.886 | 0.888
target11 | 0.390 | 0.941 | 0.125 | 0.424 | 0.648 | 0.951
target12 | 0.569 | 0.928 | 0.243 | 0.861 | 0.739 | 0.919
Average | 0.711 | 0.867 | 0.519 | 0.798 | 0.755 | 0.908
Table 7. Average overlap precision (OP ∈ [0, 1] ↑, ↑ means higher is better).
Targets | RM | GGIW | ASKCF | EKF | KCF | JCBF
target1 | 0.227 | 0.271 | 0.603 | 0.449 | 0.473 | 0.653
target2 | 0.210 | 0.326 | 0.331 | 0.469 | 0.192 | 0.635
target3 | 0.048 | 0.618 | 0.105 | 0.776 | 0.785 | 0.774
target4 | 0.443 | 0.504 | 0.312 | 0.606 | 0.534 | 0.647
target5 | 0.291 | 0.463 | 0.514 | 0.504 | 0.350 | 0.700
target6 | 0.374 | 0.407 | 0.597 | 0.666 | 0.581 | 0.687
target7 | 0.322 | 0.519 | 0.519 | 0.637 | 0.562 | 0.681
target8 | 0.320 | 0.359 | 0.253 | 0.618 | 0.605 | 0.621
target9 | 0.160 | 0.404 | 0.040 | 0.668 | 0.686 | 0.672
target10 | 0.245 | 0.299 | 0.530 | 0.687 | 0.704 | 0.722
target11 | 0.100 | 0.570 | 0.073 | 0.317 | 0.433 | 0.751
target12 | 0.159 | 0.534 | 0.149 | 0.591 | 0.326 | 0.670
Average | 0.241 | 0.439 | 0.335 | 0.584 | 0.519 | 0.684
Table 8. Some challenging targets in monitor scenarios.
Challenging Situations | Targets
Shape or appearance changing | target1, target4, target5
Multiple scattering points | target1, target4, target5
Interfered with by clutter | target1, target2, target3
Weak target | target12
Maneuvering targets | target1, target2
