
1 Introduction

Track-before-detect (TBD) technology is an effective method for detecting and tracking weak targets in low-SNR environments [1]. TBD is a multi-frame signal accumulation technique. Unlike traditional detection methods, TBD does not detect the target by thresholding each frame; instead, after accumulating multiple frames of data, it delivers the target trajectory at the same time as the detection result.

The principle of the dynamic programming TBD (DP-TBD) algorithm is clear and its performance is excellent, which has made it a research hotspot in recent years. The basic idea of DP-TBD is to convert target detection from a multistage decision-making process into multiple single-stage problems; the merit function is optimized at each stage so as to obtain a globally optimal solution. DP-TBD was originally used for optical image processing, which was the first application of DP to TBD [2]. DP-TBD algorithms are divided into probability-density accumulation and energy accumulation [3, 4]. The former is suitable for maneuvering targets but requires knowledge of the clutter prior distribution (CPD). The latter does not require CPD information and constructs the stage function directly from the target amplitude or energy, but it is only applicable to weakly maneuvering targets whose assumed trajectories approximate the true motion trajectories. Since then, a great deal of work has been done to improve the performance of DP-TBD [5,6,7,8].

This paper proposes an improved DP-TBD algorithm that can effectively detect and track maneuvering targets. The improved algorithm introduces the acceleration component into the target state vector, which improves the prediction of position and velocity in the Kalman filter, so that the state transition set can be adjusted in time as the target speed changes. In addition, the introduction of the state transition probability enables the target energy to be accumulated more accurately along the true trajectory. Finally, this paper optimizes the termination decision strategy by improving the way the threshold is set, so that the detection performance for maneuvering targets is clearly improved.

2 Traditional Energy Accumulation Algorithm Model

2.1 System Model

In this paper, we assume the target is moving radially relative to the radar, and the radar obtains an M × N measurement matrix for each full scan, which is called a frame; a total of K frames are observed. The radar scan interval is T, and the measurement data at frame k can be expressed as

$$ Z_{k} = \{ z_{k} (x,y)|x = 1,2, \ldots ,M;y = 1,2, \ldots ,N;1 \le k \le K\} $$
(1)

Where \( z_{k} (x,y) \) is the measurement recorded in cell (x, y), which is given by

$$ z_{k} (x,y) = \left\{ {\begin{array}{*{20}l} {A_{k} + n_{k} (x,y),} \hfill & {\text{target}\;\text{in}\;\text{cell}\;(x,y)\;\text{at}\;\text{frame}\;k} \hfill \\ {n_{k} (x,y),} \hfill & {\text{no}\;\text{target}\;\text{in}\;\text{cell}\;(x,y)\;\text{at}\;\text{frame}\;k} \hfill \\ \end{array} } \right. $$
(2)

where \( A_{k} \) is the target amplitude and \( n_{k} \) is the measurement noise, which follows a normal distribution.

\( X_{k} = [s_{x} (k),v_{x} (k),s_{y} (k),v_{y} (k)] \) represents the position and velocity of the target in the x-direction and y-direction at frame k. The target trajectory sequence from the first frame to the K-th frame is given by

$$ X(k) = \{ X_{1} ,X_{2} , \ldots ,X_{K} \} $$
(3)

\( Z(k) = \{ Z_{1} ,Z_{2} , \ldots ,Z_{K} \} \) is the set of measurement sequences generated along the target trajectory, and \( \hat{X}(k) \) is the estimated target sequence, which should be the sequence most likely to originate from the trajectory of a real target.

2.2 Traditional DP-TBD Algorithm

(1) Initialization: For all states \( X_{1} \)

$$ \left\{ {\begin{array}{*{20}l} {I_{1} (x,y) = |z_{1} (x,y)|} \hfill \\ {\varPsi_{1} (x,y) = 0} \hfill \\ \end{array} } \right. $$
(4)

where \( I( \cdot ) \) denotes the merit value function and \( \varPsi ( \cdot ) \) stores the state transition records.

(2) Recursion: For \( 2 \le k \le K \), for all states \( X_{k} \)

$$ \left\{ {\begin{array}{*{20}l} {I_{k} (x,y) = \mathop { \hbox{max} }\limits_{{(x^{*} ,y^{*} ) \in J_{k} (x,y)}} \{ |z_{k} (x,y)| + I_{k - 1} (x^{*} ,y^{*} )\} } \hfill \\ {\varPsi_{k} (x,y) = \mathop {\arg \hbox{max} }\limits_{{(x^{*} ,y^{*} ) \in J_{k} (x,y)}} \{ I_{k - 1} (x^{*} ,y^{*} )\} } \hfill \\ \end{array} } \right. $$
(5)

where \( J_{k} (x,y) \) denotes the state transition set, i.e., the set of all positions from which the target in cell (x, y) at frame k could have come at frame k − 1. It is determined by the target's maximum speeds \( v_{x}^{\hbox{max} } \) and \( v_{y}^{\hbox{max} } \) as follows:

$$ J_{k} (x,y) = \{ (a,b)|x - v_{x}^{\hbox{max} } \le a \le x + v_{x}^{\hbox{max} } ,y - v_{y}^{\hbox{max} } \le b \le y + v_{y}^{\hbox{max} } \} $$
(6)

(3) Judgment and termination: For threshold \( V_{T} \), find

$$ \{ \hat{X}_{K} \} = \{ X_{K} :I_{K} (x,y) > V_{T} \} $$
(7)

(4) Backtracking: for \( k = K - 1,K - 2, \ldots ,1 \)

$$ \hat{X}_{k} = \varPsi_{k + 1} (x,y) $$
(8)
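For reference, the following is a minimal sketch (not the authors' implementation) of steps (1)–(4) in Python/NumPy, assuming the measurements are stored in a K × M × N array `z` and that the transition window uses a single maximum speed `v_max` for both directions; the function and variable names are illustrative.

```python
import numpy as np

def dp_tbd(z, v_max, V_T):
    """Traditional energy-accumulation DP-TBD, Eqs. (4)-(8)."""
    K, M, N = z.shape
    I = np.abs(z[0])                          # Eq. (4): merit function at frame 1
    psi = np.zeros((K, M, N, 2), dtype=int)   # Eq. (4): transition records

    for k in range(1, K):                     # Eq. (5): recursion over frames 2..K
        I_new = np.empty((M, N))
        for x in range(M):
            for y in range(N):
                # Eq. (6): transition window of half-width v_max, clipped to the grid
                xs = slice(max(x - v_max, 0), min(x + v_max + 1, M))
                ys = slice(max(y - v_max, 0), min(y + v_max + 1, N))
                window = I[xs, ys]
                ix, iy = np.unravel_index(np.argmax(window), window.shape)
                psi[k, x, y] = (xs.start + ix, ys.start + iy)
                I_new[x, y] = np.abs(z[k, x, y]) + window[ix, iy]
        I = I_new

    # Eq. (7): threshold test at the last frame; Eq. (8): backtracking
    x, y = np.unravel_index(np.argmax(I), I.shape)
    if I[x, y] <= V_T:
        return I, []                          # no detection declared
    track = [(x, y)]
    for k in range(K - 1, 0, -1):
        x, y = psi[k, x, y]
        track.insert(0, (int(x), int(y)))
    return I, track
```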

The traditional DP-TBD algorithm merely accumulates the measured data over multiple frames and ignores the relationship between the target states in successive measurement frames. If the size of the state transition set does not match the target motion, detection and tracking performance degrades, especially for maneuvering radar targets. Therefore, the selection of the state transition set is crucial to the performance of the algorithm.

3 The ASTS-DP-TBD Algorithm

When facing maneuvering targets, the traditional DP-TBD with a constant-size transition set can hardly detect them. This paper therefore proposes an improved DP-TBD algorithm. In the DP recursion, there exists an energy accumulation path for each resolution cell (i, j) at frame k, so the positions of the plots included in that path can be used to estimate the target state vector. From the estimated state vector, the target state at frame k + 1 can be predicted by the one-step state prediction of the Kalman filter.

On the one hand, the acceleration component is introduced into the target state vector. This improves the prediction of position and velocity in the Kalman filter, so that the state transition set can be adjusted as the target speed changes. On the other hand, during the state transition the target is disturbed by various noises, which reduces the probability that energy is accumulated along the real trajectory. In this paper, the state transition probability is therefore introduced into the energy accumulation strategy; it makes the target accumulate energy along the true trajectory more accurately and reduces the interference of process noise. On this basis, the paper also optimizes the termination decision strategy by improving the way the threshold is set. The new threshold adapts better to the data, and the detection performance for maneuvering targets is clearly improved.

(1) The introduction of the acceleration component into the target state vector

The improved target state vector \( x(k) \) and measurement state vector \( z(k) \) are as follows:

$$ x(k) = [s_{x} (k),v_{x} (k),a_{x} (k),s_{y} (k),v_{y} (k),a_{y} (k)] $$
(9)
$$ z(k) = [\bar{s}_{x} (k),\bar{v}_{x} (k),\bar{a}_{x} (k),\bar{s}_{y} (k),\bar{v}_{y} (k),\bar{a}_{y} (k)] $$
(10)

where \( s_{x} (k) \), \( v_{x} (k) \), \( a_{x} (k) \) are the position, velocity, and acceleration in the x-direction, respectively, and \( s_{y} (k) \), \( v_{y} (k) \), \( a_{y} (k) \) are their counterparts in the y-direction.

In fact, only the target position is measured; the pseudo-measurements of velocity and acceleration are given by

$$ [\bar{v}_{x} (k),\bar{v}_{y} (k)] = [\bar{s}_{x} (k),\bar{s}_{y} (k)] - [\hat{s}_{x} (k - 1),\hat{s}_{y} (k - 1)],\quad k \ge 2 $$
(11)
$$ [\bar{a}_{x} (k),\bar{a}_{y} (k)] = [\tilde{a}_{x} (k),\tilde{a}_{y} (k)],\quad k \ge 2 $$
(12)
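As an illustration, the pseudo-measurement vector of Eqs. (10)–(12) could be assembled as below; the helper name, the argument layout, and the implicit scan interval T = 1 are our assumptions, not the paper's.

```python
# Sketch of Eqs. (10)-(12): only the position is measured; velocity is taken as the
# difference from the previous estimated position, and acceleration is copied from
# the predicted state. State ordering follows Eq. (9): [s_x, v_x, a_x, s_y, v_y, a_y].
def build_measurement(s_meas, s_hat_prev, x_tilde):
    vx = s_meas[0] - s_hat_prev[0]   # Eq. (11), x component (scan interval T = 1 assumed)
    vy = s_meas[1] - s_hat_prev[1]   # Eq. (11), y component
    ax, ay = x_tilde[2], x_tilde[5]  # Eq. (12): predicted accelerations
    return [s_meas[0], vx, ax, s_meas[1], vy, ay]
```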

By introducing the acceleration component into the target state vector, the state transition set can be adjusted in time according to changes in the target position and speed. The state transition set is given by

$$ J_{k} (x,y) = \{ (a,b)|x - \tilde{v}_{x} (k) \le a \le x + \tilde{v}_{x} (k),y - \tilde{v}_{y} (k) \le b \le y + \tilde{v}_{y} (k)\} $$
(13)
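A minimal sketch of Eq. (13) follows, in which the half-width of the transition window tracks the predicted speed components; the rounding and the one-cell minimum width are illustrative assumptions.

```python
import numpy as np

def adaptive_transition_set(x, y, v_x_pred, v_y_pred, M, N):
    """Candidate predecessor cells of (x, y) per Eq. (13)."""
    wx = max(1, int(np.ceil(abs(v_x_pred))))   # half-width in x from the predicted speed
    wy = max(1, int(np.ceil(abs(v_y_pred))))   # half-width in y from the predicted speed
    return [(a, b)
            for a in range(max(x - wx, 0), min(x + wx, M - 1) + 1)
            for b in range(max(y - wy, 0), min(y + wy, N - 1) + 1)]
```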

(2) The introduction of state transition probability

In order to make the target more accurately accumulate energy along the true trajectory, this paper introduces the state transition probability in the energy accumulation strategy. In the Kalman filtering process, the probability of the transition from \( x_{k - 1} (x^{'} ,y^{'} ) \) to \( x_{k} (x,y) \) can be calculated as

$$ D_{k - 1} (x^{{\prime }} ,y^{{\prime }} ) = \frac{1}{{\sqrt {2\pi } \sigma }}\exp \left( { - \frac{{d^{2} }}{{2\sigma^{2} }}} \right),\quad k \ge 2 $$
(14)

where \( d = \sqrt {(x^{{\prime \prime }} - x)^{2} + (y^{{\prime \prime }} - y)^{2} } \) denotes the distance between \( (x^{{\prime \prime }} ,y^{{\prime \prime }} ) \) and \( (x,y) \). And \( (x^{{\prime \prime }} ,y^{{\prime \prime }} ) \) is the predicted value of \( (x,y) \) at frame k.
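For illustration, Eq. (14) can be evaluated as in the sketch below; sigma, which controls how quickly the weight decays with distance from the Kalman-predicted cell, is a design parameter we assume is chosen by the user.

```python
import numpy as np

def transition_probability(x, y, x_pred, y_pred, sigma):
    """Gaussian transition weight of Eq. (14) for candidate cell (x, y)."""
    d = np.hypot(x_pred - x, y_pred - y)   # distance to the predicted cell
    return np.exp(-d**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
```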

(3) ASTS-DP-TBD algorithm steps

Step 1: Initialization. For all states \( X_{1} \)

$$ \left\{ {\begin{array}{*{20}l} {I_{1} (x,y) = |z_{1} (x,y)|} \hfill \\ {\varPsi_{1} (x,y) = [0,0]^{\text{T}} } \hfill \\ \end{array} } \right. $$
(15)
$$ \hat{x}(1) = [\bar{s}_{x} (1),\bar{v}_{x} (1),0,\bar{s}_{y} (1),\bar{v}_{y} (1),0] $$
(16)
$$ \hat{P}_{1} = [\sigma_{w} ,\sigma_{w} /T,\sigma_{w} /T^{2} ,\sigma_{w} ,\sigma_{w} /T,\sigma_{w} /T^{2} ]^{\text{T}} \, [\sigma_{w} ,\sigma_{w} /T,\sigma_{w} /T^{2} ,\sigma_{w} ,\sigma_{w} /T,\sigma_{w} /T^{2} ] $$
(17)

where \( P \) is the predicted state error covariance matrix.

Step 2: State prediction.

$$ \tilde{x}(k) = F\hat{x}(k - 1),\quad k \ge 2 $$
(18)

where \( F \) is the state transition matrix, given by

$$ F = \left[ {\begin{array}{*{20}l} 1 \hfill & T \hfill & {T^{2} /2} \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 1 \hfill & T \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 1 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill & T \hfill & {T^{2} /2} \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill & T \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill \\ \end{array} } \right] $$
(19)
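A short sketch of Eqs. (18)–(19) in NumPy is given below, assuming the state ordering of Eq. (9); the function names are illustrative.

```python
import numpy as np

def transition_matrix(T):
    """Constant-acceleration state transition matrix of Eq. (19)."""
    block = np.array([[1.0, T, T**2 / 2.0],
                      [0.0, 1.0, T],
                      [0.0, 0.0, 1.0]])
    F = np.zeros((6, 6))
    F[:3, :3] = block      # x-direction: [s_x, v_x, a_x]
    F[3:, 3:] = block      # y-direction: [s_y, v_y, a_y]
    return F

def predict_state(x_hat_prev, T):
    """One-step state prediction of Eq. (18)."""
    return transition_matrix(T) @ x_hat_prev
```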

Step 3: Recursion. For \( 2 \le k \le K \), for all states \( X_{k} \)

$$ \left\{ {\begin{array}{*{20}l} {I_{k} (x,y) = \mathop { \hbox{max} }\limits_{{(x^{*} ,y^{*} ) \in J_{k} (x,y)}} \{ |z_{k} (x,y)| + D_{k - 1} (x^{*} ,y^{*} ) + I_{k - 1} (x^{*} ,y^{*} )\} } \hfill \\ {\varPsi_{k} (x,y) = \mathop {\arg \hbox{max} }\limits_{{(x^{*} ,y^{*} ) \in J_{k} (x,y)}} \{ I_{k - 1} (x^{*} ,y^{*} )\} } \hfill \\ \end{array} } \right. $$
(20)

where \( J_{k} (x,y) \) is the state transition set, determined by (13), and \( D_{k - 1} (x^{*} ,y^{*} ) \) is the state transition probability, determined by (14).

Step 4: Calculation of predicted state error covariance matrix

$$ \tilde{P}_{k} = F\hat{P}_{k - 1} F^{\text{T}} + C_{w} ,\quad k \ge 2 $$
(21)
$$ G_{k} = \tilde{P}_{k} (\tilde{P}_{k} + C_{v} )^{ - 1} ,\quad k \ge 2 $$
(22)
$$ \hat{P}_{k} = (I - G_{k} )\tilde{P}_{k} ,\quad k \ge 2 $$
(23)

where \( G_{k} \) is the filter gain matrix, and \( C_{w} \) and \( C_{v} \) are the process noise and measurement noise covariance matrices, respectively.

Step 5: State estimation.

$$ \hat{x}(k) = \tilde{x}(k) + G_{k} [z(k) - \tilde{x}(k)]\quad k \ge 2 $$
(24)
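Steps 4–5 amount to a standard Kalman update; a minimal sketch under the assumption implied by Eqs. (22) and (24) that the measurement vector has the same dimension as the state (an identity measurement matrix) might look as follows.

```python
import numpy as np

def kalman_update(x_tilde, P_hat_prev, z, F, C_w, C_v):
    """Steps 4-5, Eqs. (21)-(24), with an implicit identity measurement matrix."""
    P_tilde = F @ P_hat_prev @ F.T + C_w          # Eq. (21): predicted error covariance
    G = P_tilde @ np.linalg.inv(P_tilde + C_v)    # Eq. (22): filter gain
    P_hat = (np.eye(6) - G) @ P_tilde             # Eq. (23): updated covariance
    x_hat = x_tilde + G @ (z - x_tilde)           # Eq. (24): state estimate
    return x_hat, P_hat
```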

Step 6: Judgment and termination: For threshold \( V_{T} \), find

$$ \{ \hat{X}_{K} \} = \{ X_{K} :I_{K} (x,y) > V_{T} \} $$
(25)
$$ V_{T} = - b_{n} \cdot \,\ln [ - \ln (1 - p_{d} )] + a_{n} $$
(26)
$$ a_{n} = \mu + \sigma [(2\lg n)^{1/2} - \frac{1}{2}\frac{(\lg (\lg n) + \lg (4\pi ))}{{(2\lg n)^{1/2} }}] $$
(27)
$$ b_{n} = \frac{{(2\lg n)^{1/2} }}{\sigma },\quad n = M \times N \times num $$
(28)

where \( p_{d} \) is the detection probability; M and N are the numbers of distance units in the x and y directions; \( num \) is the number of distance units the target may span between two frames; and \( \mu \) and \( \sigma \) are the mean and standard deviation of the merit function obtained by accumulating k frames along the target trajectory.
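Read literally, Eqs. (26)–(28) can be computed as in the sketch below; interpreting \( \lg \) as the base-10 logarithm is our assumption.

```python
import numpy as np

def termination_threshold(p_d, mu, sigma, M, N, num):
    """Adaptive threshold of Eqs. (26)-(28)."""
    n = M * N * num                                       # Eq. (28): candidate cell count
    lg_n = np.log10(n)                                    # 'lg' read as base-10 log
    a_n = mu + sigma * ((2 * lg_n) ** 0.5
                        - 0.5 * (np.log10(lg_n) + np.log10(4 * np.pi))
                          / (2 * lg_n) ** 0.5)            # Eq. (27)
    b_n = (2 * lg_n) ** 0.5 / sigma                       # Eq. (28)
    return -b_n * np.log(-np.log(1 - p_d)) + a_n          # Eq. (26)
```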

Step 7: Backtracking: for \( k = K - 1,K - 2, \ldots ,1 \)

$$ \hat{X}_{k} = \varPsi_{k + 1} (x,y) $$
(29)

4 Simulations and Result Analysis

To verify the performance of the improved DP-TBD algorithm, the ASTS-DP-TBD algorithm is compared with other DP-TBD algorithms. Assume that the radar data received per frame has a size of 70 × 60, a total of 22 frames are received, and the scan interval is \( T = 1 \). The target, with initial state \( x_{1} = [1,1.6,0,8,2,0]^{{\prime }} \), performs turning maneuvers in the observation area in which both the speed magnitude and direction change. In addition, the measurement noise is assumed to follow a Gaussian distribution. This paper uses the target detection probability \( \left( {p_{d} } \right) \) and track probability \( \left( {p_{k} } \right) \) to evaluate the performance of the algorithm. (1) Detection probability: the probability that the maximal accumulated value at the last frame exceeds the threshold and that the error between the estimated and real target positions at the last frame is no more than two units. (2) Track probability: the probability that the error between the estimated and real target positions is no more than two units at every frame. In the simulation, 100 Monte Carlo experiments were performed.
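To make the two criteria concrete, the sketch below evaluates them for a single Monte Carlo run; treating the two-unit error as a Euclidean distance in cells, and the function and argument names, are our assumptions.

```python
import numpy as np

def run_metrics(est_track, true_track, peak_exceeds_threshold, tol=2):
    """Per-run detection/tracking outcome; positions are (x, y) cells per frame."""
    err = [np.hypot(ex - tx, ey - ty)
           for (ex, ey), (tx, ty) in zip(est_track, true_track)]
    detected = peak_exceeds_threshold and err[-1] <= tol   # last-frame criterion
    tracked = all(e <= tol for e in err)                   # every-frame criterion
    return detected, tracked

# p_d and p_k are then the fractions of the 100 Monte Carlo runs in which
# `detected` and `tracked` are True, respectively.
```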

Figure 1 shows a comparison of the merit functions obtained after 22 frames of accumulation for the various DP-TBD algorithms at an SNR of 7 dB. Figure 1a shows the merit function of the traditional DP-TBD algorithm, Fig. 1b that of the ASTS-DP-TBD algorithm without the state transition probability (NASTS-DP-TBD), and Fig. 1c that of the ASTS-DP-TBD algorithm. The first two algorithms suffer from the agglomeration effect, so the peak of the merit function does not stand out from the clutter. We can see that the ASTS-DP-TBD algorithm overcomes the agglomeration effect well and detects the true state of the target more accurately.

Fig. 1. Comparison of merit functions

Figure 2 shows the detection probability of the basic DP-TBD algorithm, the NASTS-DP-TBD algorithm, and the ASTS-DP-TBD algorithm. When the state transition number is q = 9, the \( p_{d} \) of the basic DP-TBD is almost 0; when q = 16, the \( p_{d} \) performance is still not ideal. This is because of the mismatch between the state transition set and the target velocity. From Fig. 2, we can see that NASTS-DP-TBD and ASTS-DP-TBD perform better; furthermore, when the state transition probability is considered, the performance of ASTS-DP-TBD improves further. As Fig. 3 shows, when the track probability approaches 1, the SNR required by ASTS-DP-TBD is about 3 dB lower than that of NASTS-DP-TBD. This is because the proposed algorithm jointly uses the measurement data and the target motion characteristics to confirm the real target. From Fig. 4, we can see that the ASTS-DP-TBD algorithm can effectively track the maneuvering target compared with the other algorithms.

Fig. 2. Detection probability

Fig. 3. Track probability

Fig. 4. Comparison of detection and tracking results

5 Conclusions

The traditional DP-TBD with a constant-size state transition set can hardly detect and track maneuvering targets because of the mismatch between the target velocity and the transition set, and ignoring the state transition probability further degrades the algorithm's performance. This paper proposes a new method that introduces Kalman filtering and the target state transition probability into the traditional algorithm. In addition, the termination decision strategy is also optimized, which significantly improves the detection and tracking performance.