1 Introduction

The Internet of Things (IoT) [1–8], which has gained widespread acceptance recently, represents a technological revolution in the fields of future networks, mobile computing, and wireless communications. A future IoT may be established over the architecture of a wireless sensor network (WSN) [9–14], which consists of hundreds of sensor nodes performing distributed sensing and collaborative computing. By deploying sensors on or around physical objects, the IoT seamlessly integrates a world of networked smart objects, makes their information shareable on a global scale, and provides intelligent computing and information processing capabilities, such as reporting the status, position, and surrounding conditions of each sensor node. Various application domains have already benefited from the IoT, especially industrial production, logistics, management, and civilian life. As a new type of Internet application, the IoT ushers in a new ubiquitous computing era that will profoundly change people's lives [15].

Localization and tracking of moving targets, a conventional and classical problem, has already been studied for several years in many fields, including wireless cellular network-based mobile user localization [16], simultaneous localization and mapping (SLAM) systems [17], global positioning systems (GPS) [18], ad hoc network-based localization [19], and ultrasonic-based object localization [20]. However, solutions in the above fields may not directly suit an IoT scenario, in which large quantities of sensor nodes performing distributed sensing and collaborative information processing tasks are interconnected over a wireless channel [21].

With the rapid development of WSNs, sensor-based localization and tracking have attracted many research efforts in recent years [21–25], most of which transform the localization and tracking problem into target state estimation by utilizing Kalman filtering [26–28]—a prevailing method for numerous nonlinear estimation, advanced signal processing and machine learning applications. As one of the improved Kalman filtering schemes, extended Kalman filtering (EKF) is employed in [29], where it successfully combines different sensor data to achieve suboptimal estimation of target states. However, the EKF has two inherent flaws [30]: on the one hand, it uses only a first-order linearization to update the mean and covariance of the state variables, ignoring the higher-order terms of the Taylor-series expansion, which results in only first-order accuracy; on the other hand, the EKF needs to calculate the Jacobian matrix, which has proven to be a complicated task. Some subsequent studies [30–35] employ unscented Kalman filtering (UKF) as an alternative to the EKF for solving the target localization and tracking problem. By utilizing a deterministic "sampling" approach to calculate mean and covariance terms with higher accuracy, the UKF outperforms the EKF in estimation and prediction without the burden of calculating the Jacobian matrix [36]. Although the UKF is far more promising for target localization and tracking, there still exist numerical problems such as loss of positive definiteness of the state covariance matrix, which derives from rounding errors and may lead to filtering failure [37]. To solve this problem, the square-root UKF (SR-UKF) has been introduced for target localization and tracking [37, 38], in which the square root of the covariance matrix is propagated so as to ensure the nonnegative definiteness of the state covariance matrix.

Based on the studies above, we propose an SR-UKF-based localization and tracking algorithm to discover the real-time location of a moving target in an IoT environment containing large quantities of sensors. First, a least-square (LS) criterion-based mathematical model is proposed for localization initialization in an IoT scenario. Next, we employ the SR-UKF idea for further localization and tracking. Using the data coming from sensor nodes neighboring the target, the real-time location of the moving target can be estimated by implementing the SR-UKF in an iterative fashion so as to achieve target status tracking. Simulation results show that the proposed algorithm achieves good performance in estimating both the position and velocity of the target under either uniform linear motion or variable-speed curve motion. Compared with some existing conventional EKF- and UKF-based methods, the proposed algorithm shows lower localization error at the same computational complexity, which demonstrates its potential significance for ubiquitous computing applications in an IoT environment.

The rest of this paper is organized as follows: Section II presents a localization initialization model in the IoT. The SR-UKF-based localization and tracking algorithm are proposed in Section III. Performance evaluations of our proposed algorithm are presented and discussed in Section IV. Finally, Section V presents our conclusions.

2 Localization initialization model in the IoT

In an IoT scenario, hundreds or thousands of networked sensor nodes are located at the sensing layer of the IoT, performing distributed sensing and collaborative data processing tasks. Each sensor node has the ability to sense and report its own state and surrounding conditions. In this work, we uniformly deploy N "location-aware" sensor nodes in an IoT scenario, each of which has a priori knowledge of its own location.

Assume that the sensing ability of each sensor node is limited to a circular region with radius R. All sensors periodically detect their surroundings, and the detection period is usually set short enough that the target moves only a small distance within one period. Note that there are only two possible detection results for each sensor node at a certain moment: 1 or 0. Here, "1" means that the target is detected at this moment, while "0" means the target is undetected. When the detection result of a sensor node differs from its result at the immediately preceding moment, the node can be considered an available sensor node for localization initialization at this moment, and the distance between this node and the moving target is then approximately R. As shown in Fig. 1, the sensor node is available for localization initialization at t 2 and t 5.

Fig. 1
figure 1

Available sensor node (at t 2 and t 5)
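The availability test described above amounts to detecting flips in each node's binary detection sequence; a minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

# Hedged sketch: a sensor node becomes "available" for localization
# initialization when its binary detection result flips between two
# consecutive sampling instants (0 -> 1 or 1 -> 0); at such an instant
# the target is approximately at distance R from the node.
def available_instants(detections):
    """detections: sequence of 0/1 results, one per sampling instant.
    Returns the indices at which the node is available."""
    d = np.asarray(detections)
    # a flip at index k means d[k] != d[k-1]
    return [k for k in range(1, len(d)) if d[k] != d[k - 1]]

# Example matching Fig. 1: flips at t2 and t5
print(available_instants([0, 0, 1, 1, 1, 0, 0]))  # -> [2, 5]
```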

If there are three available sensor nodes for localization initialization, we can determine the initial location area of the moving target by utilizing the localization initialization model shown in Fig. 2, where the shaded area represents the initial location region of the target. However, we still expect to obtain a precise location. Here, we employ the least-square (LS) criterion [39] to calculate the initial location point of the moving target.

Fig. 2
figure 2

Localization initialization model in an IoT scenario

We assume that there are n (n ≥ 3) available sensor nodes for localization initialization, as shown in Fig. 3. Since each sensor node has a priori knowledge of its own location, the locations of these available sensor nodes are known and can be represented as (X 1, Y 1), (X 2, Y 2), …, (X n, Y n), while the unknown initial location of the moving target is denoted as (x, y). Since the distance between the moving target and an available sensor node approximates its sensing radius R as shown in Fig. 1, we can establish the following equations:

$$ \left[ {\begin{array}{*{20}c} {\left( {X_{1} - x} \right)^{2} + \left( {Y_{1} - y} \right)^{2} } \\ {\left( {X_{2} - x} \right)^{2} + \left( {Y_{2} - y} \right)^{2} } \\ \vdots \\ {\left( {X_{n} - x} \right)^{2} + \left( {Y_{n} - y} \right)^{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {R_{1}^{2} } \\ {R_{2}^{2} } \\ \vdots \\ {R_{n}^{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {w_{1} } \\ {w_{2} } \\ \vdots \\ {w_{n} } \\ \end{array} } \right] $$
(1)

where R i denotes the maximum sensing radius of the i-th (i = 1, 2, …, n) available sensor node, and w i represents the i-th measurement noise term. Subtracting the last row of Eq. (1) from the other rows yields the following equations in matrix form:

$$ {\mathbf{U}} \times {\mathbf{S}} = {\mathbf{V}} $$
(2)

where U, S and V are given by:

$$ {\mathbf{U}} = - 2 \times \left[ {\begin{array}{*{20}c} {X_{1} - X_{n} } & {Y_{1} - Y_{n} } \\ {X_{2} - X_{n} } & {Y_{2} - Y_{n} } \\ \vdots & \vdots \\ {X_{n - 1} - X_{n} } & {Y_{n - 1} - Y_{n} } \\ \end{array} } \right] $$
(3)
$$ {\mathbf{S}} = \left[ {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right] $$
(4)
$$ {\mathbf{V}} = \left[ {\begin{array}{*{20}c} {\left( {R_{1}^{2} - R_{n}^{2} } \right) + \left( {X_{n}^{2} - X_{1}^{2} } \right) + \left( {Y_{n}^{2} - Y_{1}^{2} } \right) + \left( {w_{1} - w_{n} } \right)} \\ {\left( {R_{2}^{2} - R_{n}^{2} } \right) + \left( {X_{n}^{2} - X_{2}^{2} } \right) + \left( {Y_{n}^{2} - Y_{2}^{2} } \right) + \left( {w_{2} - w_{n} } \right)} \\ \vdots \\ {\left( {R_{n - 1}^{2} - R_{n}^{2} } \right) + \left( {X_{n}^{2} - X_{n - 1}^{2} } \right) + \left( {Y_{n}^{2} - Y_{n - 1}^{2} } \right) + \left( {w_{n - 1} - w_{n} } \right)} \\ \end{array} } \right] $$
(5)
Fig. 3
figure 3

Extended localization initialization model in an IoT scenario

Since Eq. (2) contains measurement noise components that may affect the accuracy of its solution, we use the LS estimator to obtain an estimate of S. The optimal solution (or estimate) of Eq. (2) under the LS criterion is given by:

$$ {\mathbf{S}}_{LS} = \left( {{\mathbf{U}}^{T} {\mathbf{U}}} \right)^{ - 1} {\mathbf{U}}^{T} {\mathbf{V}} $$
(6)

From Eqs. (1)–(6), we obtain the initial location of the moving target, given by S LS . After the localization initialization discussed above, we implement further localization and tracking. This gives rise to an SR-UKF-based algorithm, as discussed in the next section.
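As a concrete illustration, the LS initialization of Eqs. (1)–(6) can be sketched in a few lines of NumPy (the function name ls_init and the example coordinates are ours, not from the paper):

```python
import numpy as np

# Hedged sketch of Eqs. (1)-(6): given n >= 3 available sensor positions
# (X_i, Y_i) and sensing radii R_i, build U and V by subtracting the last
# row of Eq. (1) from the others, then solve S_LS = (U^T U)^{-1} U^T V.
def ls_init(positions, radii):
    P = np.asarray(positions, dtype=float)   # shape (n, 2)
    R = np.asarray(radii, dtype=float)       # shape (n,)
    Xn, Yn = P[-1]
    U = -2.0 * (P[:-1] - P[-1])              # rows: (X_i - X_n, Y_i - Y_n)
    V = (R[:-1] ** 2 - R[-1] ** 2
         + Xn ** 2 - P[:-1, 0] ** 2
         + Yn ** 2 - P[:-1, 1] ** 2)
    # lstsq is numerically safer than forming (U^T U)^{-1} explicitly
    S_ls, *_ = np.linalg.lstsq(U, V, rcond=None)
    return S_ls                              # estimated (x, y)

# Noise-free example: target at (10, 20), three sensors
target = np.array([10.0, 20.0])
sensors = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
radii = [np.hypot(*(np.array(s) - target)) for s in sensors]
print(ls_init(sensors, radii))  # close to [10. 20.]
```

With noise-free radii the estimate recovers the target exactly; with the noise terms of Eq. (1) present, the LS solution minimizes the residual of Eq. (2).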

3 SR-UKF-based localization and tracking algorithm in the IoT

Kalman filters (KF) and their variants are promising solutions for target localization and tracking. However, standard linear KF methods are usually not suitable for target tracking with a nonlinear observation equation. The extended KF (EKF) [29], which applies the standard linear KF methodology to a linearization of the nonlinear system [37], shows improved filtering performance. However, the EKF implements only a first-order approximation of the nonlinear system and ignores the higher-order terms of the Taylor-series expansion, which degrades its performance in nonlinear systems. Although the unscented KF (UKF) [30–35] is an effective alternative to the EKF for nonlinear systems, it still suffers from numerical problems such as loss of positive definiteness of the state covariance matrix, which derives from rounding errors and may lead to filtering failure [37]. Since the square-root UKF (SR-UKF) [37, 38] outperforms the standard KF, EKF and UKF in nonlinear filtering, we employ the SR-UKF to design a localization and tracking algorithm for an IoT environment.

3.1 SR-UKF Principle

For a nonlinear dynamic system, we define the following dynamic state and observation equations for SR-UKF implementation, which are given by:

$$ \left\{ \begin{gathered} {\mathbf{a}}_{k + 1} = {\mathbf{F}}\left( {{\mathbf{a}}_{k} } \right) + {\mathbf{v}}_{k} ,\quad {\text{State Equation}} \hfill \\ {\mathbf{b}}_{k} = {\mathbf{H}}\left( {{\mathbf{a}}_{k} } \right) + {\mathbf{w}}_{k} , \quad {\text{Observation Equation}} \hfill \\ \end{gathered} \right. $$
(7)

where F(·) is the state function, which gives the recursive relationship between parameter a at the current moment (k) and the next moment (k + 1). H(·) denotes the observation function, which provides the relationship between a k and b k . F(·) may be linear or nonlinear depending on the application scenario, whereas H(·) is usually highly nonlinear in the SR-UKF. Here, a k and b k are the state and observation vectors at the k-th moment (also the k-th iteration), respectively. The value of b k can be obtained by observation or measurement, while the value of a k is unknown. v k and w k are the state and observation noise vectors, respectively. The aim of the SR-UKF is to estimate the value of a k at each k-th moment (k = 1, 2, 3, …) given the value of b k .

The SR-UKF principle for state estimation (or prediction) of a k (k = 1, 2, 3, 4,…) can be summarized as follows:

1. Initialization

$$ {\hat{\mathbf{a}}}_{0} = E\left\{ {{\mathbf{a}}_{0} } \right\},\quad {\mathbf{S}}_{0} = chol\left\{ {E\left[ {\left( {{\mathbf{a}}_{0} - {\hat{\mathbf{a}}}_{0} } \right)\left( {{\mathbf{a}}_{0} - {\hat{\mathbf{a}}}_{0} } \right)^{T} } \right]} \right\} $$
(8)

where E{·} denotes the expectation operator; chol{A} is the original Cholesky factor of A [37].

2. The k-th iterative estimation, k ∈ {1, 2, …, ∞}

(a) Sigma point calculation:

$$ {\varvec{\chi}}_{k - 1} = \left[ {{\hat{\mathbf{a}}}_{k - 1} ,{\hat{\mathbf{a}}}_{k - 1} + \sqrt {L + \lambda } {\mathbf{S}}_{k - 1} ,{\hat{\mathbf{a}}}_{k - 1} - \sqrt {L + \lambda } {\mathbf{S}}_{k - 1} } \right] $$
(9)

where L is the dimension of the state vector a k ; λ = L(α² − 1) is a compound scaling parameter, in which α is a primary scaling factor usually set between 10⁻⁴ and 1.

(b) Time update:

$$ {\varvec{\chi}}_{k|k - 1} = {\mathbf{F}}\left( {{\varvec{\chi}}_{k - 1} } \right) $$
(10)
$$ {\hat{\mathbf{a}}}_{k}^{ - } = \sum\limits_{i = 0}^{2L} {W_{i}^{(m)} {\varvec{\chi}}_{{i,k\left| {k - 1} \right.}} } $$
(11)
$$ {\mathbf{S}}_{{{\mathbf{a}}_{k}^{{}} }} = qr\left\{ {\left[ {\sqrt {W_{1}^{(c)} } \left( {{\varvec{\chi}}_{{1,k\left| {k - 1} \right.}} - {\hat{\mathbf{a}}}_{k}^{ - } } \right), \ldots ,\sqrt {W_{2L}^{(c)} } \left( {{\varvec{\chi}}_{{2L,k\left| {k - 1} \right.}} - {\hat{\mathbf{a}}}_{k}^{ - } } \right),\sqrt {{\mathbf{Q}}_{{}}^{{\mathbf{v}}} } } \right]} \right\} $$
(12)
$$ {\mathbf{S}}_{{{\mathbf{a}}_{k}^{{}} }} = cholupdate\left\{ {{\mathbf{S}}_{{{\mathbf{a}}_{k}^{{}} }} ,{\varvec{\chi}}_{0,k} - {\hat{\mathbf{a}}}_{k}^{ - } ,W_{0}^{(c)} } \right\} $$
(13)

where Q v denotes covariance matrix of the state noise vector v k ; {W i } is a set of scalar weights, which is given by:

$$ W_{0}^{(m)} = \frac{\lambda }{{L + \lambda }};\quad W_{0}^{(c)} = \frac{\lambda }{{L + \lambda }} + \left( {1 - \alpha^{2} + \beta } \right) $$
(14)
$$ W_{i}^{(m)} = W_{i}^{(c)} = \frac{1}{{2\left( {L + \lambda } \right)}};\quad i = 1, \ldots ,2L $$
(15)

where β is a secondary scaling factor; β = 2 is optimal for Gaussian distributions. qr{A} returns the upper-triangular factor R from the QR decomposition of matrix A; cholupdate{A, B, c} corresponds to the Matlab function "cholupdate": if A = chol{D} is the original Cholesky factor of D, then cholupdate{A, B, c} returns the Cholesky factor of D + cBB^T.

(c) Measurement update:

$$ {\varvec{\gamma}}_{{k\left| {k - 1} \right.}} = {\mathbf{H}}\left( {{\varvec{\chi}}_{{k\left| {k - 1} \right.}} } \right) $$
(16)
$$ {\hat{\mathbf{b}}}_{k}^{ - } = \sum\limits_{i = 0}^{2L} {W_{i}^{(m)} {\varvec{\gamma}}_{{i,k\left| {k - 1} \right.}} } $$
(17)
$$ {\mathbf{S}}_{{{\mathbf{b}}_{k} }} = qr\left\{ {\left[ {\sqrt {W_{1}^{(c)} } \left( {{\varvec{\gamma}}_{{1,k\left| {k - 1} \right.}} - {\hat{\mathbf{b}}}_{k}^{ - } } \right), \ldots ,\sqrt {W_{2L}^{(c)} } \left( {{\varvec{\gamma}}_{{2L,k\left| {k - 1} \right.}} - {\hat{\mathbf{b}}}_{k}^{ - } } \right),\sqrt {{\mathbf{Q}}^{{\mathbf{w}}} } } \right]} \right\} $$
(18)
$$ {\mathbf{S}}_{{{\mathbf{b}}_{k} }} = cholupdate\left\{ {{\mathbf{S}}_{{{\mathbf{b}}_{k} }} ,{\varvec{\gamma}}_{0,k} - {\hat{\mathbf{b}}}_{k}^{ - } ,W_{0}^{(c)} } \right\} $$
(19)
$$ {\mathbf{P}}_{{{\mathbf{a}}_{k} {\mathbf{b}}_{k} }} = \sum\limits_{i = 0}^{2L} {W_{i}^{(c)} \left[ {{\varvec{\chi}}_{{i,k\left| {k - 1} \right.}} - {\hat{\mathbf{a}}}_{k}^{ - } } \right]} \left[ {{\varvec{\gamma}}_{{i,k\left| {k - 1} \right.}} - {\hat{\mathbf{b}}}_{k}^{ - } } \right]^{T} $$
(20)
$$ {\varvec{\kappa}}_{k} = {\mathbf{P}}_{{{\mathbf{a}}_{k} {\mathbf{b}}_{k} }} \left( {{\mathbf{S}}_{{{\mathbf{b}}_{k} }} {\mathbf{S}}_{{{\mathbf{b}}_{k} }}^{T} } \right)^{ - 1} $$
(21)
$$ {\hat{\mathbf{a}}}_{k} = {\hat{\mathbf{a}}}_{k}^{ - } + {\varvec{\kappa}}_{k} \left( {{\mathbf{b}}_{k} - {\hat{\mathbf{b}}}_{k}^{ - } } \right) $$
(22)
$$ {\mathbf{U}}_{k} = {\varvec{\kappa}}_{k} {\mathbf{S}}_{{{\mathbf{b}}_{k} }} $$
(23)
$$ {\mathbf{S}}_{k} = {\text{cholupdate}}\left\{ {{\mathbf{S}}_{{{\mathbf{a}}_{k} }} ,{\mathbf{U}}_{k} , - 1} \right\} $$
(24)

where Q w denotes covariance matrix of the observation noise vector w k .

The SR-UKF performs the above procedure in an iterative fashion so as to achieve improved performance of state estimation of a k (k = 1, 2, 3, 4,…).
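The iterative procedure above can be sketched in NumPy. This is a hedged, minimal implementation of Eqs. (9)–(24) under assumed helper names (chol_update, qr_factor and sr_ukf_step are illustrative, not from the paper); it propagates a lower-triangular square-root factor and, for numerical simplicity, defaults to α = 1 so that all weights, including W 0 (c), stay positive:

```python
import numpy as np

def chol_update(S, u, sign):
    """Rank-1 Cholesky update/downdate: returns lower-triangular S' with
    S' S'^T = S S^T + sign * u u^T (sign = +1 or -1), cf. Matlab cholupdate."""
    S, u = S.copy(), u.copy()
    n = len(u)
    for i in range(n):
        r = np.sqrt(max(S[i, i] ** 2 + sign * u[i] ** 2, 0.0))
        c, s = r / S[i, i], u[i] / S[i, i]
        S[i, i] = r
        if i + 1 < n:
            S[i + 1:, i] = (S[i + 1:, i] + sign * s * u[i + 1:]) / c
            u[i + 1:] = c * u[i + 1:] - s * S[i + 1:, i]
    return S

def qr_factor(M):
    """Lower-triangular S with S S^T = M M^T via QR (as in Eqs. 12, 18)."""
    R = np.linalg.qr(M.T, mode='r')
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return (d[:, None] * R).T        # enforce a positive diagonal

def sr_ukf_step(a_hat, S, b, F, H, sqrt_Qv, sqrt_Qw, alpha=1.0, beta=2.0):
    """One SR-UKF cycle (Eqs. 9-24) for state estimate a_hat, factor S."""
    L = len(a_hat)
    lam = L * (alpha ** 2 - 1.0)
    g = np.sqrt(L + lam)
    # Eq. (9): sigma points from the square-root factor S
    chi = np.column_stack([a_hat] + [a_hat + g * S[:, i] for i in range(L)]
                                  + [a_hat - g * S[:, i] for i in range(L)])
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = Wm[0] + 1.0 - alpha ** 2 + beta
    # Eqs. (10)-(13): time update
    chi_p = np.column_stack([F(chi[:, j]) for j in range(2 * L + 1)])
    a_minus = chi_p @ Wm
    Sa = qr_factor(np.column_stack(
        [np.sqrt(Wc[1]) * (chi_p[:, 1:] - a_minus[:, None]), sqrt_Qv]))
    Sa = chol_update(Sa, np.sqrt(abs(Wc[0])) * (chi_p[:, 0] - a_minus),
                     1 if Wc[0] >= 0 else -1)
    # Eqs. (16)-(19): predicted measurement and its square-root covariance
    gam = np.column_stack([H(chi_p[:, j]) for j in range(2 * L + 1)])
    b_minus = gam @ Wm
    Sb = qr_factor(np.column_stack(
        [np.sqrt(Wc[1]) * (gam[:, 1:] - b_minus[:, None]), sqrt_Qw]))
    Sb = chol_update(Sb, np.sqrt(abs(Wc[0])) * (gam[:, 0] - b_minus),
                     1 if Wc[0] >= 0 else -1)
    # Eqs. (20)-(24): cross covariance, gain, update and factor downdate
    Pab = ((chi_p - a_minus[:, None]) * Wc) @ (gam - b_minus[:, None]).T
    K = Pab @ np.linalg.inv(Sb @ Sb.T)
    a_new = a_minus + K @ (b - b_minus)
    U = K @ Sb
    S_new = Sa
    for j in range(U.shape[1]):
        S_new = chol_update(S_new, U[:, j], -1)
    return a_new, S_new
```

A single call advances the estimate by one detection interval, a_hat, S = sr_ukf_step(a_hat, S, b_k, F, H, sqrt_Qv, sqrt_Qw), where sqrt_Qv and sqrt_Qw are Cholesky factors of the noise covariances Q v and Q w.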

3.2 SR-UKF-based localization and tracking algorithm

Here, we employ the above SR-UKF procedure to solve the localization and tracking problem in an IoT scenario. To describe the movement state of a moving target in a sensor-based IoT environment, we define the following dynamic state and observation equations.

3.2.1 State equation

$$ \begin{aligned} {\mathbf{a}}_{k + 1} &= {\mathbf{F}}\left( {{\mathbf{a}}_{k} } \right) + {\mathbf{v}}_{k} \\ &= {\mathbf{\Upphi a}}_{k} + {\mathbf{v}}_{k} \\ \end{aligned} $$
(25)

where a k denotes the location and velocity information of the moving target at the k-th moment; v k represents the state noise vector; Φ is the state transition matrix. Parameters in Eq. (25) should be defined according to the motion pattern of the target, which may move in any fashion. Without loss of generality, we consider two situations: uniform linear motion and variable-speed curve motion.

When the target moves in a uniform linear motion (ULM), parameters in Eq. (25) are given by:

$$ {\mathbf{a}}_{k} = \left[ {\begin{array}{*{20}c} {x_{k} } & {y_{k} } & {x^{\prime}_{k} } & {y^{\prime}_{k} } \\ \end{array} } \right]^{T} $$
(26)
$$ {\varvec{\Upphi}}_{ULM} = \left[ {\begin{array}{*{20}c} {{\mathbf{I}}_{2 \times 2} } & {T{\mathbf{I}}_{2 \times 2} } \\ {{\mathbf{0}}_{2 \times 2} } & {{\mathbf{I}}_{2 \times 2} } \\ \end{array} } \right] $$
(27)

where [x k , y k ] are the location coordinates of the target at the k-th moment; [x′ k , y′ k ] is the derivative of [x k , y k ], i.e., the velocity in the x- and y-axis directions; I 2×2 and 0 2×2 denote the 2 × 2 identity matrix and all-zero matrix, respectively; T is the time interval of sensor detection.

When the target moves in a variable-speed curve motion (VCM), parameters in Eq. (25) are given by:

$$ {\mathbf{a}}_{k} = \left[ {\begin{array}{*{20}c} {x_{k} } & {y_{k} } & {x_{k}^{\prime } } & {y_{k}^{\prime } } & {x_{k}^{\prime \prime } } & {y_{k}^{\prime \prime } } \\ \end{array} } \right]^{T} $$
(28)
$$ {\varvec{\Upphi}}_{VCM} = \left[ {\begin{array}{*{20}c} {{\mathbf{I}}_{2 \times 2} } & {T{\mathbf{I}}_{2 \times 2} } & {\frac{1}{2}T^{2} {\mathbf{I}}_{2 \times 2} } \\ {{\mathbf{0}}_{2 \times 2} } & {{\mathbf{I}}_{2 \times 2} } & {T{\mathbf{I}}_{2 \times 2} } \\ {{\mathbf{0}}_{2 \times 2} } & {{\mathbf{0}}_{2 \times 2} } & {{\mathbf{I}}_{2 \times 2} } \\ \end{array} } \right] $$
(29)

where x″ k and y″ k represent the second-order derivatives of x k and y k , i.e., the accelerations in the x- and y-axis directions.
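For reference, the transition matrices of Eqs. (27) and (29) can be assembled blockwise; a small NumPy sketch (the function names phi_ulm and phi_vcm are ours, not from the paper):

```python
import numpy as np

# Hedged sketch: block transition matrices of Eqs. (27) and (29), built
# from the detection interval T. For ULM the state is [x, y, x', y'];
# for VCM it is [x, y, x', y', x'', y''].
def phi_ulm(T):
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    return np.block([[I2, T * I2],
                     [Z2, I2]])

def phi_vcm(T):
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    return np.block([[I2, T * I2, 0.5 * T ** 2 * I2],
                     [Z2, I2,     T * I2],
                     [Z2, Z2,     I2]])

# One noise-free VCM step with T = 1 s: each position gains v*T + 0.5*a*T^2,
# each velocity gains a*T, and the acceleration is unchanged
a0 = np.array([0.0, 0.0, 25.0, -25.0, 0.0, -1.0])
print(phi_vcm(1.0) @ a0)  # -> approximately [25, -25.5, 25, -26, 0, -1]
```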

3.2.2 Observation equation

$$ \begin{aligned} {\mathbf{b}}_{k} &= {\mathbf{H}}\left( {{\mathbf{a}}_{k} } \right) + {\mathbf{w}}_{k} \\ &= \left[ {\begin{array}{*{20}c} {\sqrt {\left( {X_{1} - x_{k} } \right)^{2} + \left( {Y_{1} - y_{k} } \right)^{2} } } \\ {\sqrt {\left( {X_{2} - x_{k} } \right)^{2} + \left( {Y_{2} - y_{k} } \right)^{2} } } \\ \vdots \\ {\sqrt {\left( {X_{M} - x_{k} } \right)^{2} + \left( {Y_{M} - y_{k} } \right)^{2} } } \\ \end{array} } \right] + {\mathbf{w}}_{k} \\ \end{aligned} $$
(30)

where (X 1, Y 1), (X 2, Y 2), …, (X M , Y M ) are the location coordinates of the M sensor nodes available for localization and tracking at the k-th moment; b k  = [R 1, R 2,…, R M ]T contains the distances between the target and the available sensor nodes at the k-th moment, where R i denotes the maximum sensing radius of the i-th (i = 1, 2,…, M) available sensor node; w k is the observation noise vector.
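The observation function H(·) of Eq. (30) is simply a vector of Euclidean ranges; a minimal sketch (the name h_ranges is illustrative):

```python
import numpy as np

# Hedged sketch of H(.) in Eq. (30): the predicted measurement is the
# vector of Euclidean distances from the target position (the first two
# state entries) to the M available sensor nodes.
def h_ranges(a_k, sensor_xy):
    """a_k: state vector [x, y, ...]; sensor_xy: (M, 2) node positions."""
    diff = np.asarray(sensor_xy, dtype=float) - a_k[:2]
    return np.linalg.norm(diff, axis=1)

sensors = [(0.0, 0.0), (3.0, 0.0)]
print(h_ranges(np.array([3.0, 4.0, 1.0, 0.0]), sensors))  # -> [5. 4.]
```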

The SR-UKF-based localization and tracking algorithm can be implemented as follows:

1. Initialization

At the initialization stage, we use the localization initialization model presented in Section II to obtain the initial location coordinates of the target [x 0, y 0]. For simplicity, the initial velocity [x′ 0, y′ 0] is given when the target moves in a uniform linear motion, and the initial acceleration [x″ 0, y″ 0] is additionally given for a variable-speed curve motion. The initial state vector is then given by:

$$ {\mathbf{a}}_{0} = \left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {x_{0}^{\prime } } & {y_{0}^{\prime } } \\ \end{array} } \right]^{T} \quad \left( {{\text{see Eq}}.\,( 26){\text{for ULM}}} \right) $$
(31)
$$ {\mathbf{a}}_{0} = \left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {x_{0}^{\prime } } & {y_{0}^{\prime } } & {x_{0}^{\prime \prime } } & {y_{0}^{\prime \prime } } \\ \end{array} } \right]^{T} \quad \left( {{\text{see Eq}}.\,(28)\,{\text{for VCM}}} \right) $$
(32)

Based on several initialization tests, we acquire a group of initial values of a 0, and then, we can calculate \( {\hat{\mathbf{a}}}_{0} \) and S 0 by using Eq. (8).
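The initialization of Eq. (8) from such a group of tests can be sketched as follows (assuming the tests yield a set of state samples; init_from_tests is an illustrative name):

```python
import numpy as np

# Hedged sketch of Eq. (8): from a group of initialization tests we take
# the sample mean as a_hat_0 and the Cholesky factor of the sample
# covariance as S_0.
def init_from_tests(samples):
    A = np.asarray(samples, dtype=float)      # shape (runs, L)
    a0_hat = A.mean(axis=0)
    P0 = np.cov(A, rowvar=False)              # sample covariance of a_0
    return a0_hat, np.linalg.cholesky(P0)     # chol{.} as in Eq. (8)

# ULM example: several noisy initializations around [-400, 400, 25, -25]
rng = np.random.default_rng(0)
runs = np.array([-400.0, 400.0, 25.0, -25.0]) + rng.normal(0.0, 1.0, (10, 4))
a0_hat, S0 = init_from_tests(runs)
```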

2. The k-th iterative estimation, k ∈ {1, 2, …, ∞}

(a) Sigma point calculation

Substitute \( {\hat{\mathbf{a}}}_{k - 1} \) and \( {\mathbf{S}}_{k - 1} \) into Eq. (9), then we get \( {\varvec{\chi}}_{k - 1} \).

(b) Time update

By utilizing Eqs. (10)–(13), we get \( {\varvec{\chi}}_{{k\left| {k - 1} \right.}} \), \( {\hat{\mathbf{a}}}_{k}^{ - } \) and \( {\mathbf{S}}_{{{\mathbf{a}}_{k}^{{}} }} \). Note that F(·) in Eq. (10) should be replaced by Eq. (25).

(c) Measurement update

According to Eqs. (16)–(24), we get the estimates \( {\hat{\mathbf{a}}}_{k}^{{}} \) and \( {\mathbf{S}}_{k} \) at the k-th moment. Note that H(·) in Eq. (16) should be replaced by Eq. (30).

Performing the above procedure in an iterative fashion, we obtain estimates of the location and velocity of the moving target at any moment, which realizes target localization and tracking in an IoT scenario. Figure 4 shows the whole procedure of the proposed SR-UKF-based localization and tracking algorithm.

Fig. 4
figure 4

Flow chart of the proposed SR-UKF-based algorithm

3.3 Computational complexity

Assume that the dimensions of the state vector a k and the observation vector b k in Eqs. (25)–(30) are L and M, respectively. The computational complexity of the whole algorithm mainly comes from two complicated operations: the QR decomposition and cholupdate in Eqs. (12), (13), (18) and (19). For simplicity and without loss of generality, we only consider the complexity of these two operations. The QR decompositions in Eqs. (12) and (18) bring a complexity of O((3L + 1)L 2); the cholupdate calls in Eqs. (13) and (19) bring a complexity of O(2LM 2). Generally, L is larger than M, so the complexity of the SR-UKF-based localization and tracking algorithm is O(L 3), equal to that of both the EKF- and UKF-based methods [37].

4 Simulation results and analysis

4.1 Simulation parameters

We consider a square sensor field (1,000 m × 1,000 m) with (−500, −500 m), (−500, 500 m), (500, −500 m) and (500, 500 m) as its four corners. Fifty sensor nodes used for target detection are uniformly and randomly deployed in this field. We assume that the sensing ability of each sensor node is limited to a circular region with a radius of 300 m. T = 1 s is the time interval of sensor detection. The processing-time requirements of the whole IoT architecture can be satisfied by common processors such as the MSP430 (Mixed Signal Processor 430), a 16-bit CPU running at a 16 MHz clock.

4.2 Performance assessment

4.2.1 Case 1: The target moves in a uniform linear motion (ULM)

(a) Localization initialization

We assume that the initial location coordinates and velocity of the moving target are (−400 m, 400 m) and (x′ 0 = 25 m/s, y′ 0 = −25 m/s), respectively, giving:

$$ \begin{aligned} {\mathbf{a}}_{0} &= \left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {x_{0}^{\prime } } & {y_{0}^{\prime } } \\ \end{array} } \right]^{T} \\ &= \left[ {\begin{array}{*{20}c} { - 400\;{\text{m}}} & {400\;{\text{m}}} & {25\;{\text{m}}/{\text{s}}} & { - 25\;{\text{m}}/{\text{s}}} \\ \end{array} } \right]^{T} \\ \end{aligned} $$
(33)
(b) Estimation of target trajectory

Figure 5 compares the target trajectories obtained from three algorithms: the EKF [29], the UKF [30–35] and the proposed SR-UKF-based localization and tracking algorithm. The black line in the figure is the real moving trajectory of the target; the marked lines are the trajectory estimates of the three algorithms. From this figure, we find that both the UKF and the proposed algorithm show nearly identical and better trajectory estimation performance when the target moves in a uniform linear motion. The EKF performs poorly compared with the UKF and the proposed algorithm, which verifies the superiority of UKF-based methods over EKF-based methods. The reason is that the linearization approach of the EKF achieves only first-order accuracy, while the approximations of UKF-based methods are accurate to at least the second order [37].

Fig. 5
figure 5

Comparison of linear trajectory estimation

(c) Estimation of location point coordinates in the X-/Y-axis direction

Figures 6 and 7 compare the location estimation errors of the three algorithms in the X-axis and Y-axis directions, respectively. The EKF exhibits the worst localization result, with a location estimation error of about 20 m, while the errors of the UKF-based and the proposed algorithms are less than about 10 m and 3 m, respectively. Thus, the proposed algorithm shows slightly better position estimation performance than the UKF-based algorithm in Figs. 6 and 7. Since the square-root (SR) filter employed in our algorithm ensures the nonnegative definiteness of the state covariance matrix, the proposed algorithm shows its superiority.

Fig. 6
figure 6

Comparison of location estimation errors in the X-axis direction

Fig. 7
figure 7

Comparison of location estimation errors in the Y-axis direction

(d) Estimation of velocity in the X-/Y-axis direction

Figures 8 and 9 compare the velocity estimation errors of the three algorithms in the X- and Y-axis directions, respectively. We find that the velocity estimates of all three algorithms converge within several seconds. However, the proposed algorithm shows the best velocity tracking, since its velocity estimation error is limited to less than about 0.2 m/s in both the X- and Y-axis directions.

Fig. 8
figure 8

Comparison of velocity estimation errors in the X-axis direction

Fig. 9
figure 9

Comparison of velocity estimation errors in the Y-axis direction

4.2.2 Case 2: The target moves in a variable-speed curve motion (VCM)

(a) Localization initialization

According to the basic physical laws of object movement, a difference in acceleration between the X- and Y-axis directions results in a variable-speed curve motion. Here, we assume that the initial location coordinates, velocity and acceleration of the moving target are (−400 m, 400 m), (x′ 0 = 25 m/s, y′ 0 = −25 m/s) and (x″ 0 = 0 m/s², y″ 0 = −1 m/s²), respectively. Note that the acceleration values (x″ 0 and y″ 0) differ between the X- and Y-axis directions, so the target moves in a variable-speed curve motion. The initial state is summarized by:

$$ \begin{aligned} {\mathbf{a}}_{0} &= \left[ {\begin{array}{*{20}c} {x_{0} } & {y_{0} } & {x_{0}^{\prime } } & {y_{0}^{\prime } } & {x_{0}^{\prime \prime } } & {y_{0}^{\prime \prime } } \\ \end{array} } \right]^{T} \\ &= \left[ {\begin{array}{*{20}c} { - 400\;{\text{m}}} & {400\;{\text{m}}} & {25\;{\text{m/s}}} & { - 25\;{\text{m/s}}} & {0\;{\text{m/s}}^{2} } & { - 1\;{\text{m/s}}^{2} } \\ \end{array} } \right]^{T} \\ \end{aligned} $$
(34)
(b) Estimation of target trajectory

Figure 10 compares the target trajectories recovered by the three algorithms. All three recovered curves (marked lines) are close to the real trajectory, which means all three algorithms perform well in target localization and tracking when the target moves in a variable-speed curve motion. However, the UKF and the proposed algorithm again show nearly identical and superior trajectory estimation performance compared with the EKF-based algorithm, similar to the conclusion for uniform linear motion. This demonstrates that the proposed algorithm can be applied to various occasions where the target moves in different patterns.

Fig. 10
figure 10

Comparison of curve trajectory estimation

(c) Estimation of location point coordinates in the X-/Y-axis direction

Figures 11 and 12 compare the positioning errors of the three algorithms in the X- and Y-axis directions, respectively. From the two figures, we find that the proposed algorithm maintains the best positioning performance among the three. Its positioning errors remain the same as those in Figs. 6 and 7 (less than 3 m), which demonstrates that the movement state of the target has little influence on the positioning performance of the proposed algorithm.

Fig. 11
figure 11

Comparison of location estimation errors in the X-axis direction

Fig. 12
figure 12

Comparison of location estimation errors in the Y-axis direction

(d) Estimation of velocity in the X-/Y-axis direction

Figures 13 and 14 compare the velocity estimation errors of the three algorithms in the X-axis and Y-axis directions, respectively. Both the convergence and the velocity estimation errors of the proposed algorithm in the variable-speed curve motion case are similar to those in the uniform linear motion case, which demonstrates that the velocity state of the target has little influence on the tracking performance of the proposed algorithm. Although accelerations exist in the X- and Y-axis directions for the variable-speed motion, the velocity estimation accuracy of the proposed algorithm remains the best among the three algorithms, with errors less than about 0.2 m/s.

Fig. 13
figure 13

Comparison of velocity estimation errors in the X-axis direction

Fig. 14
figure 14

Comparison of velocity estimation errors in the Y-axis direction

5 Conclusions

In order to achieve real-time localization and tracking of a moving target, we have proposed a square-root unscented Kalman filtering (SR-UKF)-based algorithm suitable for an IoT environment where hundreds or thousands of networked sensor nodes are located at the sensing layer for distributed sensing and collaborative data processing. The data generated by the wireless sensor nodes of the IoT contribute to the localization and tracking of the moving target. Based on the architecture of a wireless sensor network (WSN), we first presented a least-square (LS) criterion-based mathematical model for localization initialization in the IoT scenario. Then, we employed the SR-UKF idea for further localization and tracking. Using the data coming from sensor nodes near the target, the real-time location and velocity of the moving target can be estimated by implementing the SR-UKF in an iterative fashion so as to achieve target status tracking. Simulation results show that the proposed algorithm achieves good performance in estimating both the position and velocity of the target under either uniform linear motion or variable-speed curve motion. Compared with some conventional extended Kalman filtering (EKF)- and UKF-based methods, the proposed algorithm shows lower location/velocity estimation error at the same computational complexity. Under our simulation parameter settings, its location and velocity estimation errors are limited to less than about 3 m and 0.2 m/s, respectively, which already satisfies the demands of everyday localization in campuses, buildings and other public regions. In summary, this work is of significance for sensor-based localization and tracking applications.