
1 Introduction

The bearings-only tracking (BOT) problem has been an active area of research for decades. It is a nonlinear filtering problem whose goal is to estimate the target states (positions and velocities) using only bearing measurements that are corrupted with noise [1]. These measurements are obtained by a moving platform, called the observer, with the help of one or more sensors, whereas the target can be a ship, an aircraft, or any mobile platform. The problem of target tracking with only bearing angles as measurements is sometimes referred to as bearings-only target motion analysis (TMA). The nonlinearity of the measurement equation and the associated noise make the problem challenging. Another significant factor is observability: the problem is not observable, and the filtering solution does not converge, unless the observer manoeuvres [2].

Significant research has been conducted on the BOT problem formulated in two-dimensional space [3, 4]. Since the measurement is nonlinear, the assumption that the prior and posterior probability densities are Gaussian may not hold. Hence, for 2D BOT problems, nonlinear filters under both Gaussian and non-Gaussian assumptions have been implemented. For nonlinear systems, the most widely used estimator is the extended Kalman filter (EKF) [5]. The EKF applies the traditional Kalman filter equations to the nonlinear system by linearising the nonlinear model [6]; in doing so, it is computationally efficient compared with more advanced Kalman filters. Later, various deterministic sampling point filters, such as the unscented Kalman filter (UKF) [7], cubature Kalman filter (CKF) [8], cubature quadrature Kalman filter (CQKF) [9], and Gauss–Hermite filter (GHF) [10], were implemented. All these filters assume Gaussianity of the prior and posterior density functions. Their filtering accuracy and comparative performance are widely reported in the literature [11].

Among the filters that relax the Gaussian assumption are the particle filter (PF) [12] and algorithms that use the Gaussian sum approach [13]. Most of the nonlinear filters mentioned above have been implemented within the Gaussian sum framework and evaluated for solving BOT problems [11]. In the majority of scenarios described for the study of 2D BOT problems, significant track-loss was observed in the estimator performance. Hence, a track-loss condition was formulated and applied so that only the most accurate estimates were considered for performance analysis. Without this condition, certain filters such as the EKF diverged or performed with very low accuracy [4]. All these works focused on tracking a target assumed to follow a constant velocity motion. Numerous works are also available on tracking a manoeuvring target with bearings-only measurements in 2D [4].

A pseudolinear estimator (PLE) for the 2D case was introduced [14] to deal directly with the nonlinearity and convergence problems. This approach reformulates the nonlinear bearing measurement into a linear form. Its disadvantage is that the correlation between the new form and the bearing noise leads to bias [15, 16]. To overcome this bias, the iterative instrumental variable (IV) estimator [17] and a bias-compensated IV estimator [18] were proposed. These were all recursive algorithms; a batch iterative counterpart of the IV approach, termed the modified IV estimator, was also proposed [19]. From this discussion, it is evident that the BOT problem in 2D scenarios has been widely studied.

However, relatively few works address the BOT problem in 3D geometry [20,21,22,23]. Recently, this problem has received more attention, as researchers have decided not to ignore the depth, or the height difference between the target and observer. The basic difference is the addition of one more measurement, the elevation angle. Some of the nonlinear filters used for solving the BOT problem have already been applied in 3D geometry [24]. In [25], in addition to the problem formulation in Cartesian coordinates, tracking algorithms were also developed in modified spherical coordinates. Robust tracking algorithms have also been developed for 3D BOT problems [26]. Similar to the PLE and IV estimators developed for 2D geometry, the same concepts, with certain modifications, were applied to the problem formulation in 3D geometry [27].

Here, the authors focus on solving the 3D BOT problem formulated in the Cartesian coordinate system, presenting a comparative study of the performance of certain sigma point Kalman filters. The new sigma point Kalman filter (NSKF) [28] is a recent addition to the sigma point Kalman filter family, which claims higher accuracy than the conventional UKF. Rather than assigning the same weight to all sigma points, the NSKF assigns larger weights to the sigma points nearer to the mean. The inherent disadvantage of this formulation is the increase in the number of sigma points.

This paper deals with a 3D bearings-only target tracking scenario for which the performance of the EKF, UKF, CKF, and NSKF is studied. The filtering accuracy is analysed using the root mean square error (RMSE) of the resultant position and velocity. To further analyse the accuracy, the effects of the initial condition and of varying measurement noise are taken into account. The rest of the paper is organised as follows: Sect. 2 formulates the 3D BOT problem in Cartesian coordinates, and the filtering algorithms are described in Sect. 3. The simulation results are discussed in Sect. 4, followed by concluding remarks in Sect. 5.

2 Problem Formulation

The purpose of the 3D bearings-only tracking problem studied in this paper is to accurately estimate the position and velocity of the target from bearing (\(\beta \)) and elevation (\(\epsilon \)) measurements obtained by an observer at fixed-time intervals. The target motion is assumed to be a constant velocity motion during the observation period. We assume that the observer’s motion is deterministic, with known position and velocity. Figure 1 is a Cartesian coordinate frame representing the position of the target and observer.

Fig. 1

Target–observer in Cartesian coordinate frame

Let the target and observer state vector be defined as

$$\begin{aligned} {X^\mathrm{{t}}_k}&= {\begin{bmatrix} {x^\mathrm{{t}}_k}&{y^\mathrm{{t}}_k}&{z^\mathrm{{t}}_k}&{{\dot{x}}^\mathrm{{t}}_k}&{{\dot{y}}^\mathrm{{t}}_k}&{{\dot{z}}^\mathrm{{t}}_k} \end{bmatrix}}^\mathrm{{T}} \\ {X^\mathrm{{o}}_k}&= {\begin{bmatrix} {x^\mathrm{{o}}_k}&{y^\mathrm{{o}}_k}&{z^\mathrm{{o}}_k}&{{\dot{x}}^\mathrm{{o}}_k}&{{\dot{y}}^\mathrm{{o}}_k}&{{\dot{z}}^\mathrm{{o}}_k} \end{bmatrix}}^\mathrm{{T}}. \end{aligned}$$

Now, a relative state vector is defined as

$$\begin{aligned} \begin{aligned} \mathcal {X}_{k}&= {X^\mathrm{{t}}_k}- {X^\mathrm{{o}}_k}\\ &= {\begin{bmatrix} x^\mathrm{{t}}_k-x^\mathrm{{o}}_k&y^\mathrm{{t}}_k-y^\mathrm{{o}}_k &z^\mathrm{{t}}_k-z^\mathrm{{o}}_k &{\dot{x}}^\mathrm{{t}}_k-{\dot{x}}^\mathrm{{o}}_k &{\dot{y}}^\mathrm{{t}}_k-{\dot{y}}^\mathrm{{o}}_k &{\dot{z}}^\mathrm{{t}}_k-{\dot{z}}^\mathrm{{o}}_k \end{bmatrix}}^\mathrm{{T}} \\ &={\begin{bmatrix} {x_k}&{y_k}&{z_k}&\dot{{x_k}}&\dot{{y_k}}&\dot{{z_k}} \end{bmatrix}}^\mathrm{{T}}. \end{aligned} \end{aligned}$$
(1)

Let the range vector \(\mathtt {r}\) be defined as

$$\begin{aligned} \begin{aligned} \mathtt {r}&= {\begin{bmatrix} x^\mathrm{{t}}_k-x^\mathrm{{o}}_k&y^\mathrm{{t}}_k-y^\mathrm{{o}}_k &z^\mathrm{{t}}_k-z^\mathrm{{o}}_k \end{bmatrix}}^\mathrm{{T}}\\&= {\begin{bmatrix} {x_k}&{y_k}&{z_k} \end{bmatrix}}^\mathrm{{T}}. \end{aligned} \end{aligned}$$
(2)

Now, the actual range or the slant range at any instant is defined as \(r_k\,{=}\,\sqrt{x^2_k+y^2_k+z^2_k}\). From the target–observer geometry shown in Fig. 1, \(\mathtt {r}\) can be expressed in terms of \(\beta \) and \(\epsilon \) as

$$\begin{aligned} \mathtt {r}=r\begin{bmatrix} \cos \epsilon \sin \beta \\ \cos \epsilon \cos \beta \\ \sin \epsilon \end{bmatrix}. \end{aligned}$$

Here, \(\beta \ \in [0, 2\pi ]\) and \(\epsilon \ \in [-\frac{\pi }{2}, \frac{\pi }{2}]\). Moreover, the ground range or the range in x-y plane is defined as \(r_\mathrm{{g}}=r\cos \epsilon \).
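These relations can be checked with a short numerical round trip. The following sketch (our own NumPy illustration, not part of the original work; the function names are ours) converts an angle pair to the relative position vector and back, with the bearing measured from the Y-axis as in Fig. 1:

```python
import numpy as np

def angles_to_unit_vector(beta, eps):
    """Unit line-of-sight vector for bearing beta (from the Y-axis) and elevation eps."""
    return np.array([np.cos(eps) * np.sin(beta),
                     np.cos(eps) * np.cos(beta),
                     np.sin(eps)])

def vector_to_angles(r_vec):
    """Recover (beta, eps) from a relative position vector [x, y, z]."""
    x, y, z = r_vec
    beta = np.arctan2(x, y) % (2.0 * np.pi)      # beta in [0, 2*pi)
    eps = np.arctan2(z, np.hypot(x, y))          # eps in [-pi/2, pi/2]
    return beta, eps

# Round trip: slant range times the unit vector reproduces the relative position.
r = 5000.0
beta, eps = np.deg2rad(40.0), np.deg2rad(10.0)
r_vec = r * angles_to_unit_vector(beta, eps)
b2, e2 = vector_to_angles(r_vec)
```

The four-quadrant `arctan2` keeps the recovered bearing unambiguous over the full \([0, 2\pi)\) interval, which a plain \(\tan^{-1}(x/y)\) would not.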

2.1 Process Model

The discrete time state equation describing the constant velocity target dynamics can be expressed as

$$\begin{aligned} {X^\mathrm{{t}}}_k= F_{k-1}X^\mathrm{{t}}_{k-1}+ \mathtt {w}^\mathrm{{t}}_{k-1}, \end{aligned}$$
(3)

where \(F_{k-1}\) is the state transition matrix defined as

$$\begin{aligned} F_{k-1}= \begin{bmatrix} 1 &{} 0 &{} 0 &{} T &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 &{} T &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} T \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \end{bmatrix}. \end{aligned}$$

T is the sampling interval, and \( \mathtt {w}^\mathrm{{t}}_{k-1}\) is zero-mean Gaussian noise, uncorrelated with the state \(X^\mathrm{{t}}\).

Now, the relative state vector dynamics using (3) and (1) is defined as [24]

$$\begin{aligned} \mathcal {X}_k= F_{k-1}\mathcal {X}_{k-1}+ \mathtt {w}^\mathrm{{t}}_{k-1}- X^\mathrm{{o}}_k+ F_{k-1}X^\mathrm{{o}}_{k-1}, \end{aligned}$$
(4)

where \({\mathtt {w}^\mathrm{{t}}}_{k-1} \sim N(0, Q_{k-1}) \) and

$$\begin{aligned} Q_{k-1}= \begin{bmatrix} \frac{T^3}{3} q_x &{} 0 &{} 0 &{} \frac{T^2}{2} q_x &{} 0 &{} 0\\ 0 &{} \frac{T^3}{3} q_y &{} 0 &{} 0 &{} \frac{T^2}{2} q_y &{} 0\\ 0 &{} 0 &{} \frac{T^3}{3} q_z &{} 0 &{} 0 &{} \frac{T^2}{2} q_z \\ \frac{T^2}{2} q_x &{} 0 &{} 0 &{} T q_x &{} 0 &{} 0 \\ 0 &{} \frac{T^2}{2} q_y &{} 0 &{} 0 &{} T q_y &{} 0 \\ 0 &{} 0 &{} \frac{T^2}{2} q_z &{} 0 &{} 0 &{} T q_z \end{bmatrix}. \end{aligned}$$

Here, \(q_x, q_y, q_z\) are the power spectral densities of the process noise along the X, Y, and Z axes, respectively. Figure 2 shows the target–observer dynamics considered in this study. It can be seen that the target follows a constant velocity model and the observer manoeuvre is modelled in different constant velocity phases.
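The block structure of \(F_{k-1}\) and \(Q_{k-1}\) can be assembled compactly in code. The sketch below is a NumPy illustration under our own naming (the paper's simulations are in MATLAB); it builds both matrices from the sampling interval and the noise intensities:

```python
import numpy as np

def cv_matrices(T, qx, qy, qz):
    """State transition F and process noise covariance Q for the 3D
    constant-velocity model, in block form."""
    I3 = np.eye(3)
    Z3 = np.zeros((3, 3))
    F = np.block([[I3, T * I3],
                  [Z3, I3]])
    q = np.diag([qx, qy, qz])                      # per-axis noise intensities
    Q = np.block([[T**3 / 3 * q, T**2 / 2 * q],
                  [T**2 / 2 * q, T * q]])
    return F, Q

# Values used later in the simulation section.
F, Q = cv_matrices(T=1.0, qx=0.01, qy=0.01, qz=1e-4)
```

Writing the matrices in block form makes the correspondence with the displayed \(F_{k-1}\) and \(Q_{k-1}\) immediate and avoids index errors.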

2.2 Measurement Model

The measurement model involving the bearing and elevation angle is defined as

$$\begin{aligned} \mathtt {z}_k= h(\mathcal {X}_k)+ \mathtt {v}_k, \end{aligned}$$
(5)

where

$$\begin{aligned} h(\mathcal {X}_k)= \begin{bmatrix} \beta _k \\ \epsilon _k \end{bmatrix} = \begin{bmatrix} \tan ^{-1}(\frac{x_k}{y_k}) \\ \tan ^{-1}\Big (\frac{z_k}{\sqrt{{x_k}^2+ {y_k}^2}}\Big ) \end{bmatrix}. \end{aligned}$$

Here, \(\mathtt {v}_k=[\mathtt {v}_{1,k} \ \mathtt {v}_{2,k}]^\mathrm{{T}}\) is a zero mean white Gaussian noise with covariance matrix \(R_k\), that is \(\mathtt {v}_k \sim N(0, R_k), \ R_k= {\text {diag}}({\sigma _\beta }^2, {\sigma _\epsilon }^2)\). Here, \(\sigma _\beta \) and \(\sigma _\epsilon \) are the standard deviations of error in bearing and elevation angles, respectively.
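A minimal implementation of the measurement model (5) might look as follows (a NumPy sketch with hypothetical function names; `arctan2` is used so the bearing stays well defined in all quadrants):

```python
import numpy as np

def h(state):
    """Bearing/elevation measurement of the relative state [x, y, z, vx, vy, vz]."""
    x, y, z = state[:3]
    beta = np.arctan2(x, y)                  # bearing, measured from the Y-axis
    eps = np.arctan2(z, np.hypot(x, y))      # elevation above the X-Y plane
    return np.array([beta, eps])

def measure(state, sigma_beta, sigma_eps, rng):
    """Noisy measurement z_k = h(X_k) + v_k, with v_k ~ N(0, diag(sb^2, se^2))."""
    v = rng.normal(0.0, [sigma_beta, sigma_eps])
    return h(state) + v
```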

Fig. 2

3D BOT scenario

3 Recursive Bayesian Algorithms

Even though certain batch estimation algorithms have been proposed recently for the 3D BOT problem [27], we resort to three recursive deterministic sampling point filters: the cubature Kalman filter (CKF), the unscented Kalman filter (UKF), and the recently proposed new sigma point Kalman filter (NSKF). In addition, the extended Kalman filter (EKF) is also implemented. In this section, Bayesian filtering, the formulation of these filtering algorithms, and their implementation for the problem considered are discussed.

3.1 Bayesian Approach to Filtering

Suppose the discrete state-space process and measurement models are represented as in (4) and (5). In Bayesian filtering, the unknown states and parameters to be estimated are treated as random variables that follow a particular probability distribution, termed the prior probability density. The prior information regarding the states is combined with the received measurements via a conditional density function of the states given the measurements; this probability density function (pdf) is usually termed the posterior probability density function.

In this framework, the two steps involved in calculating the prior and posterior pdfs are the time update step and the measurement update step.

Time update step: The prior pdf, \(p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k-1})\), is obtained using the Chapman–Kolmogorov equation

$$\begin{aligned} p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k-1})= \int p(\mathcal {X}_{k} \vert \mathcal {X}_{k-1})p(\mathcal {X}_{k-1} \vert \mathtt {z}_{1:k-1})d\mathcal {X}_{k-1}. \end{aligned}$$
(6)

Measurement update step: The measurement at time instant k, \(\mathtt {z}_{k}\), and the prior pdf are used to obtain the posterior pdf using Bayes' rule

$$\begin{aligned} p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k})= \dfrac{p(\mathtt {z}_{k} \vert \mathcal {X}_{k})p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k-1})}{\int p(\mathtt {z}_{k} \vert \mathcal {X}_{k})p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k-1})d\mathcal {X}_{k}}. \end{aligned}$$
(7)

When the process or measurement model is nonlinear, the integrals become intractable, and we need to look for suboptimal solutions, since the prior and posterior pdfs may no longer be Gaussian. One approach is to approximate them with Gaussian distributions and take the mean as a point estimate. Under this assumption, the prior and posterior pdfs can be expressed as \( p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k-1})=N(\hat{\mathcal {X}}_{k \vert k-1},P_{k \vert k-1}) \quad \text{ and } \quad p(\mathcal {X}_{k} \vert \mathtt {z}_{1:k})=N(\hat{\mathcal {X}}_{k \vert k},P_{k \vert k})\). The posterior mean \(\hat{\mathcal {X}}_{k \vert k}\) is the estimated value of the state \(\mathcal {X}\).

3.2 Extended Kalman Filter

The extended Kalman filter is an immediate extension of the Kalman filter to nonlinear state estimation problems: the nonlinear process and measurement models are linearised, and then the Kalman filter equations are applied. In this problem, since the process model is linear, the Kalman filter equations for the time update step can be used directly. For the nonlinear measurement model, however, a linearised model has to be computed in the measurement update step. With suitable initial values of the posterior estimate \(\hat{\mathcal {X}}_{k-1|k-1}\) and covariance matrix \(P_{k-1|k-1}\) at time \(k-1\), the recursive algorithm is

$$\begin{aligned} \begin{aligned} {\hat{\mathcal {X}}}_{k|k-1}=&F_{k-1} {\hat{\mathcal {X}}}_{k-1|k-1}- X^\mathrm{{o}}_k+ F_{k-1}X^\mathrm{{o}}_{k-1} \\ {\bar{P}}_{k|k-1}=&F_{k-1} {P}_{k-1|k-1} F^\mathrm{{T}}_{k-1}+ Q \end{aligned} \end{aligned}$$

Now, the measurement update step involves the formation of Jacobian matrix followed by innovation covariance and Kalman gain, given as

$$\begin{aligned} \begin{aligned} H_{k}=&\frac{\partial {h(\hat{\mathcal {X}}_{k|k-1})}}{\partial {\hat{\mathcal {X}}_{k|k-1}}} \\ S_k=&H_k {\bar{P}}_{k|k-1} H^\mathrm{{T}}_k+ R_k \\ K_k=&{\bar{P}}_{k|k-1}H^\mathrm{{T}}_k {S^{-1}_k} \end{aligned} \end{aligned}$$

Then, the posterior mean and covariance are computed as

$$\begin{aligned} {\hat{\mathcal {X}}}_{k|k}=&{\hat{\mathcal {X}}}_{k|k-1}+ K_k[\mathtt {z}_k- h({\hat{\mathcal {X}}}_{k|k-1})] \\ P_{k|k}=&{\bar{P}}_{k|k-1}- K_k S_k K^\mathrm{{T}}_k \end{aligned}$$
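The EKF recursion above can be sketched in a few lines. The following NumPy illustration is our own: the analytic Jacobian is derived by hand from the measurement function (the paper does not list it), and the bearing residual is wrapped to avoid angle discontinuities:

```python
import numpy as np

def h(x):
    """Bearing and elevation of the relative state."""
    return np.array([np.arctan2(x[0], x[1]),
                     np.arctan2(x[2], np.hypot(x[0], x[1]))])

def jacobian_h(x):
    """Analytic Jacobian H_k of h, evaluated at the predicted state."""
    px, py, pz = x[:3]
    rho2 = px**2 + py**2          # squared ground range
    rho = np.sqrt(rho2)
    r2 = rho2 + pz**2             # squared slant range
    H = np.zeros((2, 6))
    H[0, :2] = [py / rho2, -px / rho2]
    H[1, :3] = [-px * pz / (rho * r2), -py * pz / (rho * r2), rho / r2]
    return H

def ekf_update(x_pred, P_pred, z, R):
    """One EKF measurement update: innovation, gain, posterior mean and covariance."""
    H = jacobian_h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    innov = z - h(x_pred)
    innov = (innov + np.pi) % (2 * np.pi) - np.pi   # wrap angle residuals
    x_post = x_pred + K @ innov
    P_post = P_pred - K @ S @ K.T
    return x_post, P_post
```

Wrapping the innovation matters in practice: a raw difference of two angles near the \(0/2\pi\) boundary would otherwise produce a spurious residual of nearly \(2\pi\).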

3.3 Unscented Kalman filter

The unscented Kalman filter (UKF) [7] was the first suboptimal filter to approximate the intractable integrals numerically using a set of deterministic points and weights. Its major advantage over the existing EKF is that the algorithm is derivative-free; it has proved more accurate and less sensitive to initial condition mismatch than the EKF. The central idea is to carry out the numerical approximation using the unscented transformation.

Suppose that the integral to be approximated is of the form

$$\begin{aligned} \mathtt {I}(\mathcal {X})=\int h(\mathcal {X})p_{\mathcal {X}}(\mathcal {X})\,{\text {d}}\mathcal {X} \end{aligned}$$

where \(\mathcal {X}\) is a random variable that is assumed to follow a Gaussian density \(p_{\mathcal {X}}(\mathcal {X}) \sim N(\hat{\mathtt {X}},P)\). Now, since \(\mathtt {I}(\mathcal {X})\) is not a tractable one, a numerical approximation with the help of unscented transformation is defined as

$$\begin{aligned} \mathtt {I}(\mathcal {X})\approxeq \sum _{i=1}^{2n+1}w_i h(\hat{\mathcal {X}}_i), \end{aligned}$$

where \(n\) is the order of the system.

Here

$$\begin{aligned} \begin{aligned} \hat{\mathcal {X}}_1&=\hat{\mathtt {X}}\\ \hat{\mathcal {X}}_{i+1}&=\hat{\mathtt {X}}+\big (\sqrt{(n+\kappa )P}\big )_i, \quad i=1,\ldots ,n\\ \hat{\mathcal {X}}_{i+n+1}&=\hat{\mathtt {X}}-\big (\sqrt{(n+\kappa )P}\big )_i, \quad i=1,\ldots ,n, \qquad \text{ and } \\ w_1&=\dfrac{\kappa }{n+\kappa }, \quad w_{i+1}=\dfrac{1}{2(n+\kappa )}, \ \text{ for } \ i=1,\ldots ,2n. \end{aligned} \end{aligned}$$
(8)
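The transformation (8) can be sketched as follows (our own NumPy illustration; the matrix square root is taken as the Cholesky factor, one common choice):

```python
import numpy as np

def unscented_points(mean, P, kappa):
    """Generate the 2n+1 sigma points and weights of the unscented transformation."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * P)    # columns are the rows of sqrt((n+kappa)P)
    pts = np.vstack([mean,
                     mean + L.T,               # row i is mean + i-th column of L
                     mean - L.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return pts, w

def unscented_transform(f, mean, P, kappa=1.0):
    """Approximate E[f(X)], X ~ N(mean, P), by the weighted sigma point sum."""
    pts, w = unscented_points(mean, P, kappa)
    return w @ np.array([f(p) for p in pts])
```

By construction, the weighted points reproduce the mean and covariance of \(N(\hat{\mathtt{X}}, P)\) exactly, which is the moment-matching property the filter relies on.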

3.4 New Sigma Point Kalman Filter

In the UKF, the sigma point located at the mean is given the maximum weight, whereas all other sigma points are assigned equal weights. In [28], this weight assignment was revisited on the grounds that sigma points nearer to the mean should have a higher probability of occurrence. Accordingly, a new sigma point transformation was formulated in which sigma points closer to the mean receive larger weights than sigma points further away. This new transformation also satisfies the criterion that the first two moments are matched exactly. Under the same assumption as above, \(\mathcal {X}\sim N(\hat{\mathtt {X}},P)\), the new transformation is defined as

$$\begin{aligned} \begin{aligned}&\hat{\mathcal {X}}_1=\hat{\mathtt {X}}, \quad w_1= 1-\frac{\sum _{i=1}^{n} \alpha _{i}}{{2}(\sum _{i=1}^{n} \alpha _{i}+\mathtt {b})}\\&\hat{\mathcal {X}}_{i+1}=\hat{\mathtt {X}}+ \sqrt{ \frac{\sum _{i=1}^{n} \alpha _{i}+\mathtt {b}}{\mathtt {m}\alpha _{i}}} S_{i}, \quad w_{i+1}=\frac{\mathtt {m}\alpha _{i}}{4(\sum _{i=1}^{n} \alpha _{i}+\mathtt {b})}, \ i=1,\ldots ,n \\&\hat{\mathcal {X}}_{i+1}=\hat{\mathtt {X}}- \sqrt{ \frac{\sum _{i=1}^{n} \alpha _{i}+\mathtt {b}}{\mathtt {m}\alpha _{i-n}}} S_{i-n}, \quad w_{i+1}=\frac{\mathtt {m}\alpha _{i-n}}{4(\sum _{i=1}^{n} \alpha _{i}+\mathtt {b})}, \ i=n+1,\ldots ,2n \\&\hat{\mathcal {X}}_{i+1}=\hat{\mathtt {X}}+ \sqrt{ \frac{\sum _{i=1}^{n} \alpha _{i}+\mathtt {b}}{(1-\mathtt {m})\alpha _{i-2n}}} S_{i-2n}, \ w_{i+1}=\frac{(1-\mathtt {m})\alpha _{i-2n}}{4(\sum _{i=1}^{n} \alpha _{i}+\mathtt {b})}, \ i=2n+1,\ldots ,3n \\&\hat{\mathcal {X}}_{i+1}=\hat{\mathtt {X}}- \sqrt{ \frac{\sum _{i=1}^{n} \alpha _{i}+\mathtt {b}}{(1-\mathtt {m})\alpha _{i-3n}}} S_{i-3n}, \ w_{i+1}=\frac{(1-\mathtt {m})\alpha _{i-3n}}{4(\sum _{i=1}^{n} \alpha _{i}+\mathtt {b})}, \ i=3n+1,\ldots ,4n. \end{aligned} \end{aligned}$$
(9)

Here, S is defined such that \(SS^\mathrm{{T}}=P\), and \(S_i\) and \(P_i\) denote the \(i\)th columns of \(S\) and \(P\), respectively. The variable \(\alpha \) is defined as \(\alpha _{i}=\frac{\mid \langle \mathtt {X}_i,P_{i}\rangle \mid }{\parallel \mathtt {X}_i \parallel _2 \parallel P_{i}\parallel _2}\). The constant \(\mathtt {b}\) is real and satisfies \( \mathtt {b}> \frac{1}{4} \max _i \left( \mathtt {m}\alpha _{i}\right) -\frac{1}{2}\sum _{i=1}^{n} \alpha _{i}\), and \(\mathtt {m}\) is chosen in \((0.5, 1)\).
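As a sanity check on (9), the point and weight generation can be sketched numerically. In the illustration below, the \(\alpha_i\) are supplied directly as positive numbers rather than computed from \(\langle \mathtt{X}_i, P_i\rangle\) (an assumption made for brevity), and the \(\pm\) pairs are emitted in a different order than in (9), which does not affect the moments:

```python
import numpy as np

def nskf_points(mean, P, alpha, b, m):
    """Sigma points and weights of the new sigma point transformation (9).

    alpha: positive weights alpha_i (supplied directly here for illustration);
    b, m: tuning parameters with b > max(m*alpha)/4 - alpha.sum()/2 and m in (0.5, 1).
    """
    n = mean.size
    S = np.linalg.cholesky(P)                  # S S^T = P
    A = alpha.sum() + b
    pts = [mean]
    w = [1.0 - alpha.sum() / (2.0 * A)]
    for frac in (m, 1.0 - m):                  # the two +/- pairs per axis
        for i in range(n):
            c = np.sqrt(A / (frac * alpha[i]))
            pts += [mean + c * S[:, i], mean - c * S[:, i]]
            w += 2 * [frac * alpha[i] / (4.0 * A)]
    return np.array(pts), np.array(w)
```

Working through the sums shows why (9) is moment-matching: each \(\pm\) pair contributes \(\tfrac{1}{2} S_i S_i^\mathrm{T}\) to the weighted covariance, so the two pairs per axis together rebuild \(SS^\mathrm{T} = P\), and the weights sum to one.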

If the total number of sigma points is denoted \(\mathcal {N}\), then this transformation gives \(\mathcal {N}=4n+1\), whereas for the UKF \(\mathcal {N}=2n+1\). Compared to the UKF, the NSKF has the advantage of two tuning parameters, \(\mathtt {b}\) and \(\mathtt {m}\). The algorithm for implementing the UKF and NSKF for the 3D BOT problem is given below.

figure a

4 Simulation Results

The simulation results comprise a performance comparison of the EKF, UKF, CKF, and NSKF over 1000 Monte Carlo runs. The entire tracking scenario is implemented and simulated in MATLAB. The initial parameter values required for generating the scenario shown in Fig. 2 are given in Table 1, including the target and observer initial positions along the three coordinate axes. The resultant velocities of the target and observer are given, from which the velocity components along the three coordinate axes are found. From Fig. 2, note that the target's Z-coordinate is decreasing. The bearing angle \(\beta \) is measured with reference to the Y-axis.

Table 1 Target and observer initial parameters

For each Monte Carlo run, the target is assumed to start at a position defined by the initial measurement received. Based on this measurement, suitable assumptions for the range r, target speed s, bearing, and elevation angle are made; these values are listed in Table 2. The process noise intensities used to compute the process noise covariance matrix Q are taken as \(q_x= q_y= 0.01\,{\text {m}}^2/{\text {s}}^3\) and \(q_z= 10^{-4}\,{\text {m}}^2/{\text {s}}^3\). The motion of the observer is deterministic; it moves in a plane parallel to the X-Y plane at a fixed height of 10 km.

4.1 Filter Initialisation

All the filters in this paper are initialised using the method given in [24] with the parameters shown in Table 2. The relative state is initialised from the first bearing and elevation measurements and the prior range estimate of the target. Here, \(\overline{r}\sim N(r, {\sigma _\mathrm{{r}}}^2)\) is the prior range estimate of the target, with r the assumed true initial range of the target from the observer. The initial bearing and elevation measurement estimates are \(\hat{\beta }_1\) and \(\hat{\epsilon }_1\), with headings \(\overline{\alpha }_1\) and \(\overline{\gamma }_1\), respectively. Also, \(\overline{s}\sim N(s, {\sigma _\mathrm{{s}}}^2)\) is the initial speed estimate for the target.

Table 2 Parameters for defining \( \hat{\mathcal {X}}_{0|0} \ \& \ P_{0|0}\)

The initial estimate (\(\hat{\mathcal {X}}_{0|0}\)) for relative state vector is given as [24]

$$\begin{aligned} \, \hat{\mathcal {X}}_{0|0}= \begin{bmatrix} \overline{r} \eta _{1,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{0,1}({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ \overline{r}\eta _{1,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{1,0}({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ \overline{r}\eta _{0,1}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2) \\ \overline{s}\eta _{1,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{0,1}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)- {{\dot{x}}^\mathrm{{o}}}_1 \\ \overline{s}\eta _{1,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{1,0}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)- {{\dot{y}}^\mathrm{{o}}}_1 \\ \overline{s}\eta _{0,1}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)- {{\dot{z}}^\mathrm{{o}}}_1 \end{bmatrix}, \end{aligned}$$

where \(\eta _{1,0}(\mu ,\sigma ^2)= \cos {\mu }\exp (-\sigma ^2/2) \ \text{ and } \ \eta _{0,1}(\mu ,\sigma ^2)= \sin {\mu }\exp (-\sigma ^2/2)\). Also, \(({{\dot{x}}^\mathrm{{o}}}_1, {{\dot{y}}^\mathrm{{o}}}_1, {{\dot{z}}^\mathrm{{o}}}_1)\) are the initial velocity components of the observer state vector.
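The initialisation above can be sketched as follows (our own NumPy illustration with hypothetical argument names); with zero angle variances it reduces to the exact spherical-to-Cartesian conversion:

```python
import numpy as np

def eta10(mu, var):
    """E[cos(theta)] for theta ~ N(mu, var)."""
    return np.cos(mu) * np.exp(-var / 2.0)

def eta01(mu, var):
    """E[sin(theta)] for theta ~ N(mu, var)."""
    return np.sin(mu) * np.exp(-var / 2.0)

def initial_estimate(r_bar, s_bar, beta1, eps1, alpha1, gamma1,
                     var_beta, var_eps, var_alpha, var_gamma, obs_vel):
    """Initial relative state estimate X_{0|0} built from the first measurement."""
    pos = r_bar * np.array([eta10(eps1, var_eps) * eta01(beta1, var_beta),
                            eta10(eps1, var_eps) * eta10(beta1, var_beta),
                            eta01(eps1, var_eps)])
    vel = s_bar * np.array([eta10(gamma1, var_gamma) * eta01(alpha1, var_alpha),
                            eta10(gamma1, var_gamma) * eta10(alpha1, var_alpha),
                            eta01(gamma1, var_gamma)]) - np.asarray(obs_vel)
    return np.concatenate([pos, vel])
```

The \(\exp(-\sigma^2/2)\) factors shrink the point estimate toward the origin to account for the angle uncertainty, instead of plugging the noisy angles straight into the spherical conversion.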

Let P be the covariance matrix, whose (ij)th element is represented as \(P_{i,j}\). The upper triangular nonzero elements of the covariance matrix P are [24]

$$\begin{aligned} P_{11}=&({\sigma _r}^2+{\overline{r}}^2) \eta _{2,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{0,2}({\hat{\beta }}_1, {\sigma _{\beta }}^2)- \overline{r}^2 \eta _{1,0}^2({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{0,1}^2({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ P_{12}=&({\sigma _r}^2+{\overline{r}}^2) \eta _{2,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{1,1}({\hat{\beta }}_1, {\sigma _{\beta }}^2)- \overline{r}^2 \eta _{1,0}^2({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{1,0}({\hat{\beta }}_1, {\sigma _{\beta }}^2)\eta _{0,1}({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ P_{13}=&[({\sigma _r}^2+{\overline{r}}^2) \eta _{1,1}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)- \overline{r}^2 \eta _{1,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{0,1}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)]\eta _{0,1}({\hat{\beta }}_1, {\sigma _{\beta }}^2)\\ P_{22}=&({\sigma _r}^2+{\overline{r}}^2) \eta _{2,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{2,0}({\hat{\beta }}_1, {\sigma _{\beta }}^2)- \overline{r}^2 \eta _{1,0}^2({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{1,0}^2({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ P_{23}=&[({\sigma _r}^2+{\overline{r}}^2) \eta _{1,1}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)- \overline{r}^2 \eta _{1,0}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)\eta _{0,1}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)]\eta _{1,0}({\hat{\beta }}_1, {\sigma _{\beta }}^2) \\ P_{33}=&({\sigma _r}^2+{\overline{r}}^2) \eta _{0,2}({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2)- \overline{r}^2 \eta _{0,1}^2({\hat{\epsilon }}_1, {\sigma _{\epsilon }}^2) \\ P_{44}=&({\sigma _s}^2+{\overline{s}}^2) \eta _{2,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{0,2}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)- \overline{s}^2 \eta _{1,0}^2({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{0,1}^2({\overline{\alpha }}_1, {\sigma _{\alpha }}^2) \\ P_{45}=&({\sigma _s}^2+{\overline{s}}^2) \eta _{2,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{1,1}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)- \overline{s}^2 \eta _{1,0}^2({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{1,0}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)\eta _{0,1}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2) \\ P_{46}=&[({\sigma _s}^2+{\overline{s}}^2) \eta _{1,1}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)- \overline{s}^2 \eta _{1,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{0,1}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)]\eta _{0,1}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2) \\ P_{55}=&({\sigma _s}^2+{\overline{s}}^2) \eta _{2,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{2,0}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2)- \overline{s}^2 \eta _{1,0}^2({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{1,0}^2({\overline{\alpha }}_1, {\sigma _{\alpha }}^2) \\ P_{56}=&[({\sigma _s}^2+{\overline{s}}^2) \eta _{1,1}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)- \overline{s}^2 \eta _{1,0}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)\eta _{0,1}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)]\eta _{1,0}({\overline{\alpha }}_1, {\sigma _{\alpha }}^2) \\ P_{66}=&({\sigma _s}^2+{\overline{s}}^2) \eta _{0,2}({\overline{\gamma }}_1, {\sigma _{\gamma }}^2)- \overline{s}^2 \eta _{0,1}^2({\overline{\gamma }}_1, {\sigma _{\gamma }}^2), \end{aligned}$$

where

$$\begin{aligned} \eta _{2,0}(\mu ,\sigma ^2)=&[\sinh {\sigma ^2}+ \cos ^2{\mu }\, \exp (-\sigma ^2)] \exp (-\sigma ^2) \\ \eta _{0,2}(\mu ,\sigma ^2)=&[\sinh {\sigma ^2}+ \sin ^2{\mu }\, \exp (-\sigma ^2)] \exp (-\sigma ^2) \\ \eta _{1,1}(\mu ,\sigma ^2)=&\cos {\mu } \sin {\mu }\, \exp (-2\sigma ^2). \end{aligned}$$

Since P is a symmetric matrix, the property that \(P_{ij}= P_{ji}\) gives the lower triangular elements of the covariance matrix. Hence, \(P_{0|0}\) is defined as

$$\begin{aligned} P_{0|0}= \begin{bmatrix} P_{11} &{} P_{12} &{} P_{13} &{} 0 &{} 0 &{} 0\\ P_{12} &{} P_{22} &{} P_{23} &{} 0 &{} 0 &{} 0 \\ P_{13} &{} P_{23} &{} P_{33} &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} P_{44} &{} P_{45} &{} P_{46} \\ 0 &{} 0 &{} 0 &{} P_{45} &{} P_{55} &{} P_{56} \\ 0 &{} 0 &{} 0 &{} P_{46} &{} P_{56} &{} P_{66} \end{bmatrix}. \end{aligned}$$

4.2 Performance Criteria

Two performance measures based on the root mean square error (RMSE) are used to compare the filters. The RMSE is calculated for the target states, which are recovered from the estimated relative states and the deterministic observer states; we compute the RMSE of both the resultant position and the resultant velocity. The first measure is the RMSE in position and velocity averaged from the end of the last observer manoeuvre to the end of the observation period. The second measure is the final RMSE in position and velocity at the end of the simulation period. These statistics are computed by averaging over 1000 realisations, for bearing and elevation measurement noise standard deviations of \({0.057^{\circ }}, {0.65^{\circ }}, {0.8^{\circ }}\), and \({1^{\circ }}\). The effect of increased initial uncertainty on filter accuracy is also studied. The RMSE in position is given as

$$\begin{aligned} {\text {RMSE}}_k= \sqrt{\frac{1}{M} \sum \limits _{j=1}^{M}[({x^\mathrm{{t}}}_{j,k}- {{\hat{x}}^\mathrm{{t}}}_{j,k})^2+ ({y^\mathrm{{t}}}_{j,k}- {{\hat{y}}^\mathrm{{t}}}_{j,k})^2+ ({z^\mathrm{{t}}}_{j,k}- {{\hat{z}}^\mathrm{{t}}}_{j,k})^2]} \end{aligned}$$

where k denotes the time step, and M denotes the number of Monte Carlo runs. A similar expression is used for the RMSE in velocity. In contrast to the 2D BOT scenarios widely reported in the literature, the 3D scenario considered in this study did not incur any track-loss; hence, no track-loss condition is needed for the performance analysis.
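The RMSE computation can be sketched as follows (a NumPy illustration; the array layout is our own assumption):

```python
import numpy as np

def rmse_position(truth, est):
    """RMSE_k over Monte Carlo runs.

    truth, est: arrays of shape (M, K, 6), where M is the number of runs,
    K the number of time steps, and the first three state components are
    the position coordinates. Returns one RMSE value per time step.
    """
    err = truth[:, :, :3] - est[:, :, :3]
    # Sum squared error over the three axes, average over runs, then take the root.
    return np.sqrt(np.mean(np.sum(err**2, axis=2), axis=0))
```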

4.3 Performance Comparison

For the simulation, the sampling time is \(T=1\) s, and the total observation period is 210 s. The tracking performance of the NSKF for a single run is shown in Fig. 3, where the estimated target path is plotted along with the true target path and the observer path. It is observed that the NSKF accurately tracks the target.

Fig. 3

Estimated and truth target path in X-Y plane

Fig. 4

RMSE in position

Fig. 5

RMSE in velocity

The RMSE in position and velocity is plotted in Figs. 4 and 5 to compare the estimation accuracy of the filters; the numerical results are given in Tables 3 and 4. For the particular target–observer scenario discussed in this paper, all the filters give comparable results. Increasing the measurement noise increases the final-time RMSE in both position and velocity, i.e. it lowers the filter accuracy. From Table 3, when the measurement noise is low (standard deviation \({0.057^{\circ }}\)), the EKF gives a comparable response; for the higher measurement noise standard deviation (\({1^{\circ }}\)), the NSKF performs with slightly higher accuracy. In the case of the RMSE in velocity (Table 4), the EKF and NSKF have comparable accuracy for low measurement noise, but as the measurement noise increases, the NSKF performs slightly better than the other filters. It should also be noted that when the noise in the bearing and elevation angles is increased from \({0.057^{\circ }}\) to \({1^{\circ }}\), the final RMS position error grows from 631 m to 5.9 km, indicating a significant decrease in the accuracy of all the nonlinear filters implemented.

To analyse the effect of initial uncertainty, the initial covariance matrix was scaled as \(P= \nu \times P_{0|0}\) with \(\nu = 1.5, 2, 3\), and the measurement standard deviation for the bearing and elevation angles was set to \({1^{\circ }}\). From Tables 5 and 6, it can be seen that the EKF shows more error than the NSKF as the uncertainty increases, although the difference in estimation accuracy remains small.

Table 3 RMSE in position
Table 4 RMSE in velocity
Table 5 RMSE in position with varying \(P_{0|0}\)
Table 6 RMSE in velocity with varying \(P_{0|0}\)

5 Conclusion

This paper presents a comparative study of certain nonlinear filters for solving the problem of 3D bearings-only target tracking. The filters used were the EKF, UKF, CKF, and NSKF. The filtering accuracy is analysed using the RMSE in both position and velocity, focusing on two criteria: (1) the RMS error in position and velocity at the final simulation time step, and (2) the time-averaged RMS position and velocity error after the end of the observer manoeuvre. To make the comparison more meaningful, the two criteria were evaluated while increasing the measurement noise standard deviation in both the bearing and elevation angles. It is observed that the accuracy of all filters decreases as the measurement noise increases, but at a similar rate, without any significant difference. A very small increase in accuracy was observed for the sigma point filters compared with the EKF. We emphasise that these results hold only for the particular tracking scenario considered and are not generalised to all 3D BOT problems. Except for highly nonlinear scenarios (where the bearing angle rate is very high) [11], we expect the filters to perform as described in this paper. The two criteria were also evaluated with varying initial uncertainty; here too, the sigma point filters gave results comparable to the EKF.