
1 Introduction

Real-time, high-accuracy localization is now required in many civil and military applications. In many scenarios, fusion and cooperative methods can provide positioning accuracy sufficient for general requirements [15, 20]. Zihajehzadeh et al. [20] combined the instantaneous high-precision measurements of an IMU with the drift-free ranging of TOA. However, external beacons are still required, which is unsuitable for large-area tracking. Xu et al. [15] also provided a reliable IMU/TOA fusion implementation for human motion tracking, mounting several sensing nodes on human joints to meet long-term and long-distance requirements. Nevertheless, its strict model requirements make it difficult to generalize to generic swarm-robot tracking applications. Filtering methods are also widely used to solve cooperative fusion problems, and Kalman-like filters have been applied to many important problems. The Kalman filter [16] is the most widely adopted Bayesian estimator for minimizing the variance of the estimation error, but it strictly requires a linear system model and assumes Gaussian white noise. The extended Kalman filter (EKF) [3] linearizes the nonlinear state model; however, the choice of linearization point and the truncation of higher-order terms introduce an unavoidable linearization error. To address this, You et al. [18] applied an unscented Kalman filter (UKF) to UWB/IMU fusion for indoor localization of quad-rotors. The UKF approximates the posterior distribution with sampling points, but the non-Gaussian noise problem remains unsolved.

Most of the existing IMU/TOA fusion literature models the ambient noise as Gaussian, e.g., [11, 13, 17]. However, in environments such as suburban and urban areas, man-made factors cause different sensor measurements to exhibit varied, usually non-Gaussian, distribution characteristics. TOA distance ranging is easily affected by multi-path and non-line-of-sight (NLOS) propagation. Ranging errors are typically modeled as Gaussian in line-of-sight (LOS) scenarios [14]; in NLOS scenarios they have been modeled as Gaussian [17], log-normal [1], or other non-Gaussian distributions [10, 12]. In addition, IMU measurement noise is often non-Gaussian and random, with significant impulsive characteristics [4]. The \( \alpha \)-stable distribution [8, 10] and the Student-t distribution [7, 19] are often used to model inertial-sensor noise, and it is generally accepted that IMU noise is non-Gaussian and exhibits different characteristics in different environments.
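For intuition, symmetric \( \alpha \)-stable noise of the kind cited above can be generated with the Chambers-Mallows-Stuck method. The following Python sketch is illustrative only; the stability index and sample count are our own choices, not parameters from the cited works:

```python
import numpy as np

def symmetric_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise
    (skewness beta = 0, alpha != 1). Smaller alpha gives heavier tails,
    i.e., more impulsive noise."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(42)
noise = symmetric_alpha_stable(1.5, 10000, rng)    # impulsive, zero-median noise
```

Unlike Gaussian noise, such samples occasionally contain very large outliers, which is exactly the behavior that breaks Gaussian-assumption filters.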

The Gaussian assumption therefore does not fully conform to real-world noise, and realizing a fusion positioning solution for arbitrary noise distributions remains challenging. Wang et al. [11] proposed a particle filter (PF) based on Monte Carlo sampling, which estimates the mean and covariance of the state from a set of weighted particles and approximates the posterior distribution in the region of significant probability. However, the re-sampling process suffers from particle degeneracy and depletion [5], and for high-dimensional problems high complexity is usually unavoidable [9].

From this perspective, a Gaussian condensation filter is proposed to effectively handle arbitrary noise distributions, targeting the non-Gaussian noise problem of swarm-robot tracking. To overcome the cumulative-error problem of swarm-robot tracking, an error-ellipse re-sampling-based Gaussian condensation filter is proposed. Furthermore, a spatially constrained Gaussian condensation filter is proposed to improve information fusion within the swarm-robot network.

2 Cooperative Gaussian Condensation Filter

In this section, we first detail the Gaussian condensation filter, which handles non-Gaussian noise under non-ideal conditions. On this basis, we optimize the sampling points using their geometric positions and fuse spatial distance measurements between mobile robots. We then introduce an error-ellipse re-sampling-based Gaussian condensation filter algorithm to accomplish the temporal estimation. Finally, a cooperative Gaussian condensation filter is established by considering spatial-temporal constraints.

2.1 Gaussian Condensation Filter Based on Error-Ellipse Re-Sampling

The GCF [6] recursively generates the posterior probability density function. Its key steps are as follows:

1) Prediction. The state transition process is assumed to obey a first-order Markov model, namely \( p(X_k|X_{1:k-1})=p(X_k|X_{k-1}) \). At time k, the prior probability distribution \( p(X_{k}|Z_{1:k-1}) \) is obtained by integrating the product of \( p(X_k|X_{k-1}) \) and \( p(X_{k-1}|Z_{1:k-1}) \). However, this integral is extremely complicated in non-Gaussian systems; it can be solved efficiently only when the involved functions are Gaussians or sums of deltas, which are the intrinsic assumptions of Kalman-like and particle filters, respectively. Moreover, any statistical distribution can be approximated by a mixture of Gaussians whose number of components is much smaller than the number of deltas required for the same accuracy [11]. In this study, we assume the state equation is linear but the posterior distribution is a Gaussian mixture model, namely \(\hat{p}(X_{k-1}|Z_{1:k-1})=\sum _{i=1}^{m}\alpha _i\mathcal N(X_{k-1};\mu _{k-1|k-1}^{(i)},Q _{k-1|k-1}^{(i)})\). The prediction step is then expressed as:

$$\begin{aligned} \tilde{p}(X_{k}|Z_{1:k-1})=\sum _{i=1}^{m}\alpha _i\mathcal N(X_k;\mu _{k|k-1}^{(i)},Q _{k|k-1}^{(i)}) \end{aligned}$$
(1)

where \( \mu _{k|k-1}^{(i)}=F_k\mu _{k-1|k-1}^{(i)} \) and \(Q_{k|k-1}^{(i)}=F_kQ_{k-1|k-1}^{(i)}F_k^T+A \); m is the number of Gaussian kernels and A is the covariance matrix of the process noise \(\sigma _k\).
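Because the state model is linear, the prediction of Eq. (1) reduces to pushing each Gaussian kernel through \(F_k\) and adding the process-noise covariance, leaving the weights untouched. A minimal Python sketch (the constant-velocity example and all variable names are our own, not from the paper):

```python
import numpy as np

def gcf_predict(alphas, mus, Qs, F, A):
    """Prediction step of Eq. (1): propagate each kernel of the Gaussian
    mixture through the linear model x_k = F x_{k-1} + w, w ~ N(0, A).
    Weights alpha_i are unchanged by a linear prediction."""
    mus_pred = [F @ mu for mu in mus]
    Qs_pred = [F @ Q @ F.T + A for Q in Qs]
    return alphas, mus_pred, Qs_pred

# Example: 1-D constant-velocity model (position, velocity), two kernels.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
A = 0.01 * np.eye(2)                       # process-noise covariance
alphas = [0.6, 0.4]
mus = [np.array([0.0, 1.0]), np.array([0.2, 0.8])]
Qs = [0.1 * np.eye(2), 0.2 * np.eye(2)]
alphas, mus, Qs = gcf_predict(alphas, mus, Qs, F, A)
```

Each kernel's covariance grows by A at every step, which is why the update and condensation stages are needed to keep the estimate tight.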

2) Update. The posterior probability density function is updated with the measurements \( Z_k \) at time k. Based on Eq. (1), the update step is expressed as:

$$\begin{aligned} \tilde{p}({X_k}|{Z_{1:k}}) \propto \sum \nolimits _{i = 1}^m {\alpha _i\mathcal N({X_k};\mu _{k|k - 1}^{(i)},Q_{k|k - 1}^{(i)})} \, p({Z_k}|{X_k}) \end{aligned}$$
(2)

3) Gaussian Condensation. For general nonlinear/non-Gaussian filters, the number of sufficient statistics characterizing the true posterior distribution grows without bound [2]. To avoid this, we obtain a closed-form solution by approximating the posterior distribution with a Gaussian mixture model, using the following theorem.

Let p(X) be the probability density function of a random vector \(X\in \mathbb {R}^{n}\) and let \(\lambda =(\alpha _1,\cdots , \alpha _m,\mu _1,\cdots , \mu _m,\Sigma _1,\cdots , \Sigma _m)\) be the parameters characterizing a mixture of m Gaussian distributions, namely \(q(X;\lambda )=\sum _{i=1}^{m}\alpha _i\mathcal N(X;\mu _i,\Sigma _i)\). If \(\varPhi (\lambda )\) is the KL-divergence between p(X) and \(q(X;\lambda )\), namely

$$\begin{aligned} \varPhi (\lambda )=D_{KL}(p(X)\,\Vert \,q(X;\lambda ))=E_p\{\log \frac{p}{q}\} \end{aligned}$$
(3)

then \(\lambda ^*=(\alpha _1^*,\cdots , \alpha _m^*,\mu _1^*,\cdots , \mu _m^*,\Sigma _1^*,\cdots , \Sigma _m^*) \) is a stationary point of \(\varPhi \), where

$$\begin{aligned} q_i(X;\lambda )=\alpha _i\mathcal N(X;\mu _i,\Sigma _i), \qquad \alpha _i=E_p\{\frac{q_i(X;\lambda )}{q(X;\lambda )} \} \end{aligned}$$
(4)
$$\begin{aligned} \mu _i=\frac{E_p\{\frac{q_i(X;\lambda )}{q(X;\lambda )} X\}}{E_p\{\frac{q_i(X;\lambda )}{q(X;\lambda )}\}}, \qquad \Sigma _i=\frac{E_p\{\frac{q_i(X;\lambda )}{q(X;\lambda )} (X-\mu _i)(X-\mu _i)^T\}}{E_p\{\frac{q_i(X;\lambda )}{q(X;\lambda )}\}} \end{aligned}$$
(5)

and \( E_p\{\cdot \} \) denotes the expectation with respect to the density p(X).
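For a general p, the expectations in Eqs. (4)-(5) have no closed form, so one option is to approximate them with samples drawn from p. The sketch below implements one such fixed-point iteration; the Monte Carlo scheme is our own illustration, not a numerical method prescribed by the paper:

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Multivariate normal density N(x; mu, cov)."""
    d = x - mu
    n = len(mu)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
    return np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm

def condense(samples, alphas, mus, covs):
    """One fixed-point iteration of Eqs. (4)-(5): the expectations E_p{.}
    are replaced by averages over samples drawn from p."""
    m = len(alphas)
    # responsibilities r[i, j] = q_i(x_j) / q(x_j)
    q_i = np.array([[a * gaussian_pdf(x, mu, c) for x in samples]
                    for a, mu, c in zip(alphas, mus, covs)])
    r = q_i / q_i.sum(axis=0, keepdims=True)
    new_alphas = r.mean(axis=1)                    # Eq. (4)
    new_mus, new_covs = [], []
    for i in range(m):                             # Eq. (5)
        w = r[i] / r[i].sum()
        mu = np.einsum('j,jd->d', w, samples)
        d = samples - mu
        new_mus.append(mu)
        new_covs.append(np.einsum('j,jd,je->de', w, d, d))
    return new_alphas, new_mus, new_covs

# Example: condense samples from a bimodal density into two kernels.
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(-5, 1, (500, 1)), rng.normal(5, 1, (500, 1))])
alphas, mus, covs = condense(samples, np.array([0.5, 0.5]),
                             [np.array([-4.0]), np.array([4.0])],
                             [np.eye(1), np.eye(1)])
```

The update has the same structure as one EM step for a Gaussian mixture: the ratio \(q_i/q\) plays the role of the responsibility of kernel i for each sample.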

Based on the Bayesian recursive inference criterion, a biased prior estimate will also affect the subsequent state estimates. To suppress the impact of previous estimation errors, error-ellipse re-sampling replicates, retains, or discards sampling points at different levels [11]. With this constrained re-sampling strategy, the posterior probability density \(\tilde{p}\) is obtained by recursive prediction and update. Since the sufficient statistics of \(\tilde{p}\) may grow without bound over the temporal series, Gaussian condensation is used to approximate \(\tilde{p}\) by the Gaussian mixture \(\hat{p}\). Finally, the state corresponding to the maximum of \(\hat{p}\) is the estimate at the current moment.
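The hierarchical screening can be sketched as follows, assuming Mahalanobis thresholds at the \(\sigma \) and \(3\sigma \) ellipses and a simple duplicate-inner/keep-middle/drop-outer rule; the precise replication policy is our assumption, since the text only names the three operations:

```python
import numpy as np

def error_ellipse_resample(points, center, cov):
    """Hierarchical screening sketch: points inside the 1-sigma error
    ellipse are replicated, points between 1-sigma and 3-sigma are
    retained, and points outside 3-sigma are discarded."""
    inv = np.linalg.inv(cov)
    d = points - center
    maha2 = np.einsum('jd,de,je->j', d, inv, d)      # squared Mahalanobis distance
    inner = points[maha2 <= 1.0]                     # inside the sigma ellipse
    middle = points[(maha2 > 1.0) & (maha2 <= 9.0)]  # between sigma and 3*sigma
    return np.vstack([inner, inner, middle])         # replicate inner, drop outer
```

Concentrating the surviving points near the current error ellipse is what limits the propagation of earlier estimation errors into the next recursion.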

2.2 Cooperative Gaussian Condensation Filter Based on Error-Ellipse Re-sampling

\(\{P_{k,1},P_{k,2},\cdots ,P_{k,M}\}\) serves as the prior knowledge for the subsequent cooperative Bayesian optimization. We define the joint state of the ith and jth robots at time k as \(r_k=[P_{k,i},P_{k,j}]^T\) and introduce a new state vector \(z=Tr \in \mathbb {R}^4\), where r is the joint state vector of the robots and z follows a Gaussian distribution with mean \({u_z}\) and covariance \(C_z\), i.e., \(z\sim \mathcal N(u_z,C_z)\), with \(u_z=Tu_r\) and \(C_z=TC_rT^T\). Adding the distance constraint yields the constraint information c: \(||\rho T^{-1} z||=||z_1|| \le S\). Therefore, we only need to evaluate the integral for \(p(z_1|c)\); the posterior mean \(u_{z_1|c}\) and covariance \(C_{z_1|c}\) can then be obtained by the affine transformation without computing p(z|c) directly.

To avoid computing complex numerical integrals, we use a convex combination to approximate the conditional mean and covariance:

$$\begin{aligned} {{\hat{u}}_{{z_1}|c}} \simeq \sum \limits _{i = 0}^{2n} {{w^{(i)}}z_1^{(i)}},\qquad {{\hat{D}}_{{z_1}|c}} \simeq \sum \limits _{i = 0}^{2n} {{w^{(i)}}z_1^{(i)}} {\left( {z_1^{(i)}} \right) ^\mathrm{T}} \end{aligned}$$
(6)

where \(z_1^{(i)}\) and \(w^{(i)}\) denote the sampling points and their weights, respectively, and n is the dimension of the state. When the probability mass \(\beta \) of the sampling point \(z_1^{(i)}\) lies within the constraint region, the approximation remains unchanged; otherwise the points are re-sampled so that the approximated mean falls within the convex boundary, thereby reducing the dispersion. The parameter \(\beta \) therefore determines whether a sampling point is valid. The sampling points are selected by:

$$\begin{aligned} {h^{(i)}} = \left\{ {\begin{array}{*{20}{l}} {{u_{{z_1}}}}&{}{i = 0} \\ {{u_{{z_1}}} + s_\beta ^{1/2}{{[ {C_{{z_1}}^{{1/2}}} ]}_i}}&{}{i = 1, \ldots ,n}\\ {{u_{{z_1}}} - s_\beta ^{{1/2}}{{[ {C_{{z_1}}^{{1/2}}} ]}_i}}&{}{i = n + 1, \ldots ,2n} \end{array}} \right. \end{aligned}$$
(7)

where n indicates the dimension of the position variable and \(s_\beta \) is the confidence scale satisfying \(\Pr \left( {s \le {s_\beta }} \right) = \beta \) with \(s = {\left( {{z_1} - {u_{{z_1}}}} \right) ^\mathrm{T}}C_{{z_1}}^{ - 1}\left( {{z_1} - {u_{{z_1}}}} \right) \).

The deterministic sampling method selects \(2n+1\) sampling points, and points that violate the constraint are orthogonally projected onto the constraint boundary:

$$\begin{aligned} z_1^{(i)} = \left\{ {\begin{array}{*{20}{c}} {{h^{(i)}}}&{}{\mathrm{{if}}\left\| {{h^{(i)}}} \right\| \le {S}}\\ {\frac{{{S}}}{{\left\| {{h^{(i)}}} \right\| }}{h^{(i)}}}&{}{\mathrm{{otherwise}}} \end{array}} \right. \quad i = 0, \ldots ,2n \end{aligned}$$
(8)

The weight of the sampling point is updated as

$$\begin{aligned} {w^{(i)}} = \left\{ {\begin{array}{*{20}{l}} 1 - \frac{n}{s_\beta },&{} i = 0\\ \frac{1}{2s_\beta },&{} i = 1,\ldots ,2n \end{array}} \right. \end{aligned}$$
(9)

The posterior mean and covariance are then obtained as the weighted average of the re-sampled points \(\{ {z_1^{(i)}} \}_{i = 0}^{2n}\).
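Eqs. (6)-(9) can be combined into a single routine: generate the \(2n+1\) sigma points, project the violating ones onto the constraint ball, and take the weighted average. The sketch below uses the chi-square quantile for \(s_\beta \) (since s is a squared Mahalanobis distance) and the columns of a Cholesky factor for \([C^{1/2}]_i\); both are standard but unstated choices, so treat them as assumptions:

```python
import numpy as np
from scipy.stats import chi2

def constrained_estimate(u, C, S, beta=0.95):
    """Deterministic sampling under the norm constraint ||z1|| <= S
    (Eqs. 6-9): sigma points outside the ball are orthogonally projected
    onto its boundary; a weighted average gives the constrained mean."""
    n = len(u)
    s_beta = chi2.ppf(beta, df=n)          # Pr(s <= s_beta) = beta, s ~ chi^2_n
    L = np.linalg.cholesky(C)              # one square root of C
    # 2n + 1 sigma points (Eq. 7)
    h = [u] + [u + np.sqrt(s_beta) * L[:, i] for i in range(n)] \
            + [u - np.sqrt(s_beta) * L[:, i] for i in range(n)]
    # orthogonal projection onto the constraint boundary (Eq. 8)
    z = [p if np.linalg.norm(p) <= S else S * p / np.linalg.norm(p) for p in h]
    # weights (Eq. 9)
    w = np.array([1 - n / s_beta] + [1 / (2 * s_beta)] * (2 * n))
    mean = sum(wi * zi for wi, zi in zip(w, z))
    return mean, z, w

# Tight constraint: all re-sampled points end up inside the unit ball.
mean_c, z_c, w = constrained_estimate(np.array([1.0, 2.0]), np.eye(2), S=1.0)
```

When the constraint is inactive (S large), the symmetric point set makes the weighted average collapse back to the unconstrained mean u, as expected.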

The Gaussian Condensation Filter based on Error-Ellipse Re-Sampling (EER-GCF) first makes a rough estimate of the robot state from the center point of a sampling set. Two error ellipses with scales \(3\sigma \) and \(\sigma \) are then established, and re-sampling is performed by hierarchical screening. After the mutual information between swarm robots is obtained, the spatial distance constraint c is established, and \(\hat{u}_{r|c}\), which is closer to the true position, is obtained by constrained Bayesian optimization. The position estimate \(\hat{u}_{r|c}\) optimized by the spatial constraint then updates the center point \((x_p,y_p)\), so the filter estimate at the next moment inherits the spatial information gain of the previous moment. This achieves cooperative localization based on the fusion of both temporal and spatial information.

3 Experimental Results

We carried out the location-tracking experiment in MATLAB. The simulation runs on a PC with the Windows 10 operating system, an Intel 4-core i5 CPU, and 16 GB of memory. In the experiment, the measurement noise follows an \(\alpha \)-stable distribution. Our proposed swarm-robot tracking algorithm, the Cooperative Gaussian Condensation Filter based on Error-Ellipse Re-Sampling (CEER-GCF), integrates spatial distance information to obtain high precision. To verify its effectiveness, the random-walk experiments were repeated 100 times for CPF [11], CGCF, and CEER-GCF, and the statistical positioning errors of all these algorithms were compared with the PCRLB under the same noise conditions. The results are shown in Fig. 1, from which the following conclusions can be drawn:

Fig. 1. RMSE of different algorithms in swarm robots tracking.

1. The CRMSE curves of all the cooperative algorithms are generally stable, which shows that the cooperative methods effectively fuse the information of multiple robots and therefore have higher stability.

2. The positioning accuracy of CPF reaches 0.39 m, that of CGCF 0.32 m, and that of CEER-GCF 0.27 m. CEER-GCF is the most accurate, and its CRMSE curve is much closer to the cooperative PCRLB, which verifies its effectiveness in cooperative tracking.

Fig. 2. The influence of the number of swarm robots on the performance of the algorithm.

Figure 2 shows how the number of swarm robots impacts the performance of different cooperative tracking algorithms, from which we can conclude that:

1. As the number of robots increases, the positioning-error curve remains smooth and CEER-GCF is more accurate. The larger the number of robots, the more effective the proposed algorithm, which makes it well suited to large-scale deployments.

2. As the number of robots increases, the execution time of all algorithms gradually grows, and that of CEER-GCF is slightly higher than the others. Nevertheless, it satisfies the real-time requirements of general systems.

4 Conclusion

In this paper, EER-GCF is proposed to solve the problem of non-Gaussian noise in robot tracking, and CEER-GCF is proposed to suppress the cumulative errors of inertial robot tracking. We take the information gain in the temporal series as the prior knowledge for spatial cooperative optimization, establish distance constraints between swarm robots, and obtain the state optimization under these spatial constraints, thereby realizing swarm-robot tracking with spatial-temporal fusion. The results verify that the CEER-GCF algorithm achieves high-precision positioning in harsh environments.