1 Introduction

Source localization has attracted attention in many research fields such as wireless sensor networks (WSNs), mobile communication, microphone arrays, and radio astronomy [1,2,3,4,5,6,7,8,9,10,11,12,13]. Several techniques are available for source localization, including time of arrival (TOA) [14, 15], time difference of arrival (TDOA) [10, 13], angle of arrival (AOA) [16, 17], received signal strength (RSS) [16, 18], Doppler frequency, and combinations of these techniques [7,8,9, 19, 20]. Among these methods, the TOA and TDOA based methods usually provide better results, but they are expensive to implement [9]. The AOA based method, on the other hand, requires sensors with multiple antennas or a rotating directional antenna to obtain the AOA of the source [21]. In contrast, energy based methods using the received signal strength (RSS) or the received signal strength differences (RSSD) are attractive due to their low cost and simple implementation [22,23,24,25]. Unlike RSS-based localization, RSSD-based localization has not been adequately investigated. In the RSSD method, the difference of the powers measured by a pair of sensors in a homogeneous environment defines a circle on which the source must lie; the source location is therefore obtained from the intersection of at least two such circles [25]. Since the signal power measurements are noisy, instead of a single intersection point we encounter ambiguous areas, and source location estimation methods deal with these uncertainty areas in different ways.

There are several well-known RSSD-based methods for estimating the source location. A linear least squares (LLS) technique was developed by Liu in [22]. In [3], minimax optimization and semidefinite programming were applied to estimate the source location, which is efficient for large signal-to-noise ratio scenarios. A nonlinear least squares (NLS) method was developed in [25]. A weighted least squares (WLS) method using the unscented transformation (UT) was derived in [26].

An important issue that can severely affect the performance of any localization algorithm is the relative sensor-source geometry (SSG). In [27, 28], the conditions that lead to the optimal sensor geometry for AOA localization were derived. In [29], the optimal geometries of a group of sensors for TDOA-based localization using both centralized and non-centralized methods were addressed. Two-dimensional sensor placement using time difference of arrival measurements was studied in [30]. Optimal sensor arrangement based on RSS was also studied in [31]. There are also papers that have analyzed the optimal SSG in heterogeneous sensor networks [32]. For optimal sensor placement based on TDOA, AOA, and RSS measurements, three optimality criteria are mainly considered in the literature: maximization of the determinant of the Fisher information matrix (FIM), maximization of the smallest eigenvalue of the FIM, and minimization of the trace of the inverse of the FIM. It is worth noting that these criteria generally lead to different optimal sensor placements.

In this paper, we study the optimal sensor-source geometry for RSSD-based source localization. To this end, we first derive the FIM and then use its determinant as the criterion for obtaining the optimal SSG. We then apply this criterion to various scenarios and obtain the conditions that lead to the optimal SSG.

The rest of this paper is organized as follows. In Sect. 2, we introduce the assumed system model and the required formulations for RSSD measurements. In Sect. 3, we derive the FIM for our localization problem. Based on the determinant of the FIM, the optimal sensor placement is discussed in Sect. 4. Simulation results and analysis are provided in Sect. 5. Finally, Sect. 6 presents the conclusion of this study.

Throughout this paper, vectors and matrices are shown with bold lowercase and bold uppercase letters, respectively. \(|\mathbf {A}|\) denotes the determinant of matrix \(\mathbf {A}\), \(\Vert \cdot \Vert \) denotes the Euclidean norm, and \(\Vert \mathbf {a}\Vert _\mathbf {H}^2 \mathop = \limits ^\varDelta \mathbf {a}^{T}\mathbf {H}\mathbf {a}\). The (i,j)-th entry of matrix \(\mathbf {A}\) is denoted by \([\mathbf {A}]_{i,j}\), \([\mathbf {a}]_i\) represents the ith element of vector \(\mathbf {a}\), and the superscript \({(\cdot )}^T\) denotes the transpose of a vector or a matrix. Finally, \(E\{\cdot \}\) is used for the statistical mean of a random variable.

2 The RSSD signal model

We consider a stationary source with unknown location at \(\mathbf {p}=[x \ y]^T\in {\mathbb {R}}^2 \) and \(N\geqslant 3\) sensors at known locations \(\mathbf {s}_i=[x_i \ y_i]^T\) for \(i=1,2,\ldots ,N \). Our localization scenario is illustrated in Fig.  1.

Fig. 1

An example of a system of sensors for finding the location of a transmitter in the region

The distance between the source and the ith sensor is \(r_i=\Vert \mathbf {p}-\mathbf {s}_i\Vert \), and \( \theta _i=\angle (\mathbf {p}-\mathbf {s}_i)\) is the bearing angle between the source and the ith sensor. The RSS at the ith sensor in the absence of noise, assuming a homogeneous environment, can be modeled as [33]

$$\begin{aligned} P_i = {K_0}\frac{P_t}{r_i^{\gamma }} \quad \text{ for } i = 1,2,\ldots ,N \end{aligned}$$
(1)

where \(P_t\) denotes the source transmit power, \(\gamma \) is the path loss exponent, and \(K_0\) is a loss component that depends on system factors. The path loss exponent \(\gamma \) depends on the environment, typically taking a value between 1 and 6, and is assumed known in this work. Assuming a log-normal distribution for \(P_i\), its measured value in dB is [23]

$$\begin{aligned} {\widetilde{P}}_i =10\log (K_0) + P_t - 10\gamma \log _{10} r_i + \varepsilon _i \quad \text{ for } i = 1,\ldots ,N. \end{aligned}$$
(2)

Here, \(\varepsilon _i\) is the additive measurement noise, which is assumed to be zero-mean Gaussian with variance \(\sigma _i^2\) and \(\text{E}\{\varepsilon _ i \varepsilon _ j\}=0\) for \(i\ne j\). Without loss of generality, taking the Nth sensor as the reference sensor, the RSSD between the ith sensor and the reference sensor is

$$\begin{aligned} {\widetilde{\varDelta }}_i \mathop = \limits ^\varDelta {\widetilde{P}}_i - {\widetilde{P}}_N = 10\gamma \log _{10} \left( \frac{r_N}{r_i}\right) + n_i,\quad \text{ for } i = 1,\ldots ,N-1. \end{aligned}$$
(3)

In the above relation, \(n_i \mathop = \limits ^\varDelta \varepsilon _i - \varepsilon _N\) is the RSSD measurement noise, which is zero-mean Gaussian with variance \((\sigma _i^2+\sigma _N^2)\).

Let us define

$$\begin{aligned} \mathbf {\varDelta } \mathop = \limits ^\varDelta&\left[ 10\gamma \log _{10} \left( \frac{r_N}{r_1} \right) \ \ \cdots \ \ 10\gamma \log _{10} \left( \frac{r_N}{r_{N-1}} \right) \right] ^T \end{aligned}$$
(4)
$$\begin{aligned} \mathbf {n}\mathop = \limits ^\varDelta&\left[ n_1 \ \ \cdots \ \ n_{N-1} \right] ^T \end{aligned}$$
(5)

As a result, the RSSD measurement vector can be written in the following vector form

$$\begin{aligned} \widetilde{\mathbf {\varDelta }} = \left[ \begin{array}{c}{\widetilde{\varDelta }}_1 \\ \vdots \\ {\widetilde{\varDelta }}_{N-1} \end{array} \right] = \mathbf {\varDelta } + \mathbf {n}. \end{aligned}$$
(6)

Since \(E\{\varepsilon _ i\varepsilon _ j\}=0\) for \(i\ne j\), using the relation

$$\begin{aligned} E\{n_i n_j\} =E \left\{ (\varepsilon _ i-\varepsilon _N)(\varepsilon _ j-\varepsilon _N) \right\} = \left\{ \begin{array}{lc} \sigma _i^2+\sigma _N^2 &{}\quad \text{ for } i=j \\ \sigma _N^2 &{}\quad \text{ for } i\ne j \end{array}\right. \end{aligned}$$
(7)

the covariance matrix of noise vector \(\mathbf {n}\) can be obtained as

$$\begin{aligned} {\varvec{\varSigma }}= E \{{\mathbf {n}} {\mathbf {n}}^T\} = \sigma _N^2 \mathbf{1}+ \text{diag}\left( \sigma _1^2,\ldots ,\sigma _{N-1}^2 \right) , \end{aligned}$$
(8)

where \(\mathbf{1}\) is the \((N-1)\times (N-1)\) matrix with all entries equal to one, and \(\text{diag}(\cdot )\) represents the diagonalization operator.
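As a concrete illustration of the model (2)-(3) and of the covariance matrix (8), the following minimal Python/NumPy sketch generates a noisy RSSD vector and builds \({\varvec{\varSigma }}\). The sensor layout, noise level, and helper names are illustrative assumptions, not part of the formulation above.

```python
import numpy as np

rng = np.random.default_rng(0)

def rssd_measurements(p, sensors, gamma=2.0, sigma=0.01, rng=rng):
    """Noisy RSSD vector (3) with the last sensor taken as the reference."""
    r = np.linalg.norm(sensors - p, axis=1)          # ranges r_i
    eps = sigma * rng.standard_normal(len(r))        # RSS noise eps_i (dB)
    delta = 10 * gamma * np.log10(r[-1] / r[:-1])    # noise-free RSSD, cf. (4)
    n = eps[:-1] - eps[-1]                           # RSSD noise n_i = eps_i - eps_N
    return delta + n

def rssd_covariance(sigmas):
    """Covariance matrix (8) of the RSSD noise vector."""
    sigmas = np.asarray(sigmas, dtype=float)
    m = len(sigmas) - 1
    return sigmas[-1] ** 2 * np.ones((m, m)) + np.diag(sigmas[:-1] ** 2)

# toy example (assumed layout): 4 sensors on a circle, source near the origin
sensors = 1000.0 * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)
p = np.array([50.0, -30.0])
print(rssd_measurements(p, sensors))
print(rssd_covariance([0.01] * 4))
```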

3 FIM based on RSSD

In this section we derive the FIM and the Cramer-Rao bound (CRB) matrix for our localization problem. If \(\mathbf {x}\) is an unknown vector to be estimated from the measurement \(\widehat{\mathbf {z}}= \mathbf {g}(\mathbf {x})+\mathbf {w}\), where \(\mathbf {w}\in {\mathbb {R}}^N\) is a zero-mean Gaussian random vector with covariance matrix \(\mathbf {C}\), the likelihood function of \(\mathbf {x}\) is given as [34]

$$\begin{aligned} f_{{\widehat{\mathbf {z}}}}(\widehat{\mathbf {z}};\mathbf {x})= \frac{1}{|2\pi \mathbf {C}|^\frac{1}{2}}\exp \left\{ -\frac{1}{2}(\widehat{\mathbf {z}}-\mathbf {g}(\mathbf {x}))^T \mathbf {C}^{-1}(\widehat{\mathbf {z}}-\mathbf {g}(\mathbf {x}))\right\} \end{aligned}$$
(9)

For an unbiased estimate \(\widehat{\mathbf {x}}\) of \(\mathbf {x}\), the CRB matrix gives a lower bound on the error covariance matrix of any unbiased estimator [34]. That is,

$$\begin{aligned} E\left\{ (\widehat{\mathbf {x}}-\mathbf {x})(\widehat{\mathbf {x}}-\mathbf {x})^T\right\} \geqslant \text{CRB}(\mathbf {x})\mathop = \limits ^\varDelta {\varvec{\varPhi }}(\mathbf {x})^{-1}. \end{aligned}$$
(10)

Here, \({\varvec{\varPhi }}(\mathbf {x})\) is the Fisher information matrix, which is the inverse of the CRB matrix, and its (i,j)th entry is [34]

$$\begin{aligned}{}[{\varvec{\varPhi }}(\mathbf {x})]_{i,j}= & {} E \left[ \frac{\partial }{\partial x_i} \text {ln}(f_{\widehat{\mathbf {z}}}(\widehat{\mathbf {z}};\mathbf {x})) \frac{\partial }{\partial x_j} \text {ln}(f_{\widehat{\mathbf {z}}}(\widehat{\mathbf {z}};\mathbf {x})) \right] \nonumber \\= & {} \frac{\partial \mathbf {g}^T}{\partial x_i} \mathbf {C}^{-1} \frac{\partial \mathbf {g}}{\partial x_j} + \frac{1}{2} \text {Tr}\left( \mathbf {C}^{-1}\frac{\partial \mathbf {C}}{\partial x_i} \mathbf {C}^{-1}\frac{\partial \mathbf {C}}{\partial x_j} \right) . \end{aligned}$$
(11)

In the case that the noise covariance matrix \(\mathbf {C}\) is independent of \(\mathbf {x}\), we obtain

$$\begin{aligned} \varvec{\varPhi } (\mathbf {x})= (\nabla _\mathbf {x}\mathbf {g}(\mathbf {x}))^T \mathbf {C}^{-1} (\nabla _\mathbf {x}\mathbf {g}(\mathbf {x})), \end{aligned}$$
(12)

In the above equation, \(\nabla _\mathbf {x}\mathbf {g}(\mathbf {x})\) is the Jacobian of \(\mathbf {g}(\mathbf {x})\). Based on (12), we obtain the FIM for the RSSD localization problem as

$$\begin{aligned} {\varvec{\varPhi }}= \begin{bmatrix} {\varvec{\varPhi }}_{11} &{} {\varvec{\varPhi }}_{12} \\ {\varvec{\varPhi }}_{21} &{} {\varvec{\varPhi }}_{22} \end{bmatrix} = (\nabla _\mathbf {p}\varvec{\varDelta })^T {\varvec{\varSigma }}^{-1} (\nabla _\mathbf {p}\varvec{\varDelta }) = \mathbf {J}^T {\varvec{\varSigma }}^{-1} \mathbf {J}, \end{aligned}$$
(13)

in which \(\mathbf {J}\mathop = \limits ^\varDelta (\varvec{\nabla }_\mathbf {p}\varvec{\varDelta } )\) is the \((N - 1) \times 2\) Jacobian matrix which is computed as follows

$$\begin{aligned} \mathbf {J}= \left[ \frac{\partial \varvec{\varDelta }}{\partial x} \ \ \frac{\partial \varvec{\varDelta }}{\partial y} \right] = \begin{bmatrix} (\mathbf {z}_1-\mathbf {z}_N)^T\\ \vdots \\ (\mathbf {z}_{N-1}-\mathbf {z}_{N})^T \end{bmatrix}, \end{aligned}$$
(14)

where \(\mathbf {z}_i= \frac{\beta }{r_i} \left[ \cos {\theta _i} \ \ \sin {\theta _i} \right] ^T\) and \(\beta = \frac{10 \gamma }{\ln {10}}\).

Substituting (14) into (13) and using \({\varvec{\varSigma }}^{-1}\) as computed in Appendix A, after some mathematical manipulations we obtain

$$\begin{aligned} {\varvec{\varPhi }}=\sum _{i=1}^N{\frac{1}{\sigma _i^2} \mathbf {z}_i \mathbf {z}_i^T} - \alpha \left( \sum _{i=1}^N{\frac{1}{\sigma _i^2} \mathbf {z}_i} \right) \left( \sum _{i=1}^N{\frac{1}{\sigma _i^2} \mathbf {z}_i }\right) ^T. \end{aligned}$$
(15)

where \(\alpha = \left( \sum _{i=1}^N {\frac{1}{\sigma _i^2}} \right) ^{-1}\). Now, we substitute \(\mathbf {z}_i\) into (15) to obtain the entries of the symmetric FIM as

$$\begin{aligned} {\varvec{\varPhi }}_{11}= & {} \beta ^2 \left[ \sum _{i=1}^N \frac{\cos ^2{\theta _i}}{r_i^2 \sigma _i^2} - \alpha \left( \sum _{i=1}^N \frac{\cos {\theta _i}}{r_i \sigma _i^2}\right) ^2 \right] , \end{aligned}$$
(16a)
$$\begin{aligned} {\varvec{\varPhi }}_{12}={\varvec{\varPhi }}_{21}= & {} \beta ^2 \left[ \frac{1}{2} \sum _{i=1}^N \frac{\sin {2\theta _i}}{r_i^2 \sigma _i^2} -\alpha \sum _{i=1}^N \frac{\cos {\theta _i}}{r_i \sigma _i^2}\sum _{i=1}^N \frac{\sin {\theta _i}}{r_i \sigma _i^2} \right] , \end{aligned}$$
(16b)
$$\begin{aligned} {\varvec{\varPhi }}_{22}= & {} \beta ^2 \left[ \sum _{i=1}^N \frac{\sin ^2{\theta _i}}{r_i^2 \sigma _i^2} - \alpha \left( \sum _{i=1}^N \frac{\sin {\theta _i}}{r_i \sigma _i^2}\right) ^2 \right] . \end{aligned}$$
(16c)

The Cramer-Rao lower bound (CRLB) on the root-mean-square location error is then computed as

$$\begin{aligned} \text {CRLB} = \sqrt{\frac{{\varvec{\varPhi }}_{11}+{\varvec{\varPhi }}_{22}}{|{\varvec{\varPhi }}|}}. \end{aligned}$$
(17)
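For readers who want to evaluate (16) and (17) numerically, the sketch below does so in Python/NumPy. The helper names are ours, and the value \(\sigma _i = 0.01\) is an assumed reading of the \(-20\ \text{dB}\) noise setting used later in Sect. 5.

```python
import numpy as np

def fim_rssd(theta, r, sigma, gamma=2.0):
    """2x2 FIM with entries (16a)-(16c); theta, r, sigma are length-N arrays."""
    theta, r, sigma = (np.asarray(x, dtype=float) for x in (theta, r, sigma))
    beta = 10 * gamma / np.log(10)
    w = 1.0 / sigma**2
    alpha = 1.0 / w.sum()
    a = np.sum(w * np.cos(theta) / r)              # sum_i cos(theta_i)/(r_i sigma_i^2)
    b = np.sum(w * np.sin(theta) / r)              # sum_i sin(theta_i)/(r_i sigma_i^2)
    phi11 = beta**2 * (np.sum(w * np.cos(theta)**2 / r**2) - alpha * a**2)
    phi22 = beta**2 * (np.sum(w * np.sin(theta)**2 / r**2) - alpha * b**2)
    phi12 = beta**2 * (0.5 * np.sum(w * np.sin(2 * theta) / r**2) - alpha * a * b)
    return np.array([[phi11, phi12], [phi12, phi22]])

def crlb_rssd(theta, r, sigma, gamma=2.0):
    """Root-trace CRLB (17)."""
    F = fim_rssd(theta, r, sigma, gamma)
    return np.sqrt(np.trace(F) / np.linalg.det(F))

# equiangular N = 3 example with r_i = 1000 m and assumed sigma_i = 0.01
theta = np.deg2rad([0.0, 120.0, 240.0])
F = fim_rssd(theta, r=[1000.0] * 3, sigma=[0.01] * 3)
print(np.linalg.det(F))                            # about 1.28 under the assumed sigma
print(crlb_rssd(theta, [1000.0] * 3, [0.01] * 3))  # about 1.33 m
```

Under the assumed \(\sigma \), the printed determinant is consistent with the value reported for this geometry in Sect. 5.1.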

4 Optimum sensors arrangement

To analyze the relative sensor-source geometry, we use the determinant of the Fisher information matrix as our cost function. In the sequel, we obtain the relative sensor-source geometries that maximize the determinant of the FIM.

4.1 Determinant of FIM

The determinant of FIM in (16) can be written as

$$\begin{aligned} \left| {\varvec{\varPhi }}\right| =&\frac{\beta ^4}{4} \left[ \left( \sum _{i=1}^N \frac{1}{r_i^2 \sigma _i^2} \right) ^2-\left( \sum _{i=1}^N \frac{\cos {2\theta _i}}{r_i^2 \sigma _i^2}\right) ^2 - \left( \sum _{i=1}^N \frac{\sin {2\theta _i}}{r_i ^2\sigma _i^2} \right) ^2\right] \nonumber \\&- \alpha \beta ^4 \left[ \sum _{i=1}^N \frac{\cos ^2{\theta _i}}{r_i^2 \sigma _i^2}\left( \sum _{i=1}^N \frac{\sin {\theta _i}}{r_i \sigma _i^2}\right) ^2 + \sum _{i=1}^N \frac{\sin ^2{\theta _i}}{r_i^2 \sigma _i^2}\left( \sum _{i=1}^N \frac{\cos {\theta _i}}{r_i \sigma _i^2}\right) ^2 \right. \nonumber \\&\left. -\sum _{i=1}^N \frac{\sin {2\theta _i}}{r_i ^2\sigma _i^2} \left( \sum _{i=1}^N \frac{\cos {\theta _i}}{r_i \sigma _i^2}\right) \left( \sum _{i=1}^N \frac{\sin {\theta _i}}{r_i \sigma _i^2} \right) \right] . \end{aligned}$$
(18)

It can be seen that the determinant of the Fisher information matrix depends on the range vector \(\mathbf {r}\mathop = \limits ^\varDelta [ r_1 \ \cdots \ r_N]^T\) and the observation angle vector \({\varvec{\theta }}\mathop = \limits ^\varDelta [\theta _1 \ \cdots \ \theta _N]^T\).

Theorem 1

For an arbitrary SSG in which \(\{r_i\}_{i=1}^N\) and \(\{\sigma _i\}_{i=1}^N\) are known and fixed, maximizing the determinant of the FIM over \({\varvec{\theta }}\) is equivalent to the following optimization problem

$$\begin{aligned} \widehat{{\varvec{\theta }}}= \arg \ \min _{{\varvec{\theta }}} \ \left\| \sum _{i=1}^N{\frac{\mathbf {v}(\theta _i)}{r_i^2 \sigma _i^2}} \right\| _2^2 +4 \alpha {\left\| \sum _{i=1}^N{\frac{\mathbf {u}(\theta _i)}{r_i \sigma _i^2}} \right\| }_\mathbf {H}^2 \ \end{aligned}$$
(19)

in which \(\mathbf {u}(\theta _i)=[\cos {\theta _i} \ \sin {\theta _i}]^T\), \(\mathbf {v}(\theta _i)=[\cos {2\theta _i} \ \sin {2\theta _i}]^T\), and \(\mathbf {H}= \sum \limits _{j=1}^N{\frac{1}{r_j^2 \sigma _j^2} \mathbf {u}\left( \theta _j - \frac{\pi }{2}\right) \mathbf {u}^T\left( \theta _j - \frac{\pi }{2}\right) }\), which is a positive definite matrix unless all the \(\theta _j\) are equal.

Proof

We rewrite the \(|{\varvec{\varPhi }}|\) in (18) as

$$\begin{aligned} |{\varvec{\varPhi }}|=&\frac{\beta ^4}{4} \left( \sum _{i=1}^N \frac{1}{r_i^2 \sigma _i^2} \right) ^2-\frac{\beta ^4}{4} \left[ \left( \sum _{i=1}^N \frac{\cos {2\theta _i}}{r_i^2 \sigma _i^2}\right) ^2 +\left( \sum _{i=1}^N \frac{\sin {2\theta _i}}{r_i ^2\sigma _i^2} \right) ^2\right. \nonumber \\&\left. +4\, \alpha \sum _{i=1}^N \frac{1}{r_i^2 \sigma _i^2} \left( \sin {\theta _i} \sum _{j=1}^N \frac{\cos {\theta _j}}{r_j\sigma _j^2} - \cos {\theta _i} \sum _{j=1}^N \frac{\sin {\theta _j}}{r_j \sigma _j^2}\right) ^2 \right] . \end{aligned}$$
(20)

Since \(|{\varvec{\varPhi }}|\geqslant 0\) and the first term on the right-hand side of the above equation does not depend on \({\varvec{\theta }}\), maximizing \(|{\varvec{\varPhi }}|\) over \({\varvec{\theta }}\) is equivalent to minimizing the bracketed term over \({\varvec{\theta }}\), that is

$$\begin{aligned} \arg \ \max _{{\varvec{\theta }}} |{\varvec{\varPhi }}|\equiv \arg \ \min _{{\varvec{\theta }}} J(\mathbf {r},{\varvec{\theta }}), \end{aligned}$$
(21)

where

$$\begin{aligned} J(\mathbf {r},{\varvec{\theta }}) \mathop = \limits ^\varDelta&\left( \displaystyle \sum _{i=1}^N \frac{\cos 2\theta _i}{r_i^2\sigma _i^2} \right) ^2 + \left( \displaystyle \sum _{i=1}^N \frac{\sin 2\theta _i}{r_i^2\sigma _i^2} \right) ^2 \nonumber \\&+ 4\alpha \displaystyle \sum _{i=1}^N \frac{1}{r_i^2\sigma _i^2} \left( \cos \theta _i \displaystyle \sum _{j=1}^N \frac{\sin \theta _j}{r_j\sigma _j^2} - \sin \theta _i \displaystyle \sum _{j=1}^N \frac{\cos \theta _j}{r_j\sigma _j^2} \right) ^2. \end{aligned}$$
(22)

The above relation can be rewritten as

$$\begin{aligned} \begin{array}{l} J(\mathbf {r},{\varvec{\theta }}) = \left\| \displaystyle \sum _{i = 1}^N \frac{\mathbf {v}(\theta _i)}{r_i^2\sigma _i^2} \right\| ^2 + 4\,\alpha \left[ \begin{array}{*{20}{c}} \displaystyle \sum _{i = 1}^N \frac{\cos \theta _i}{r_i\sigma _i^2} &{} \displaystyle \sum _{i = 1}^N \frac{\sin \theta _i}{r_i\sigma _i^2} \end{array} \right] \mathbf {H}\left[ \begin{array}{*{20}{c}} \displaystyle \sum _{i = 1}^N \frac{\cos \theta _i}{r_i\sigma _i^2} \\ \displaystyle \sum _{i = 1}^N \frac{\sin \theta _i}{r_i\sigma _i^2} \end{array} \right] \\ \qquad \quad = \left\| \displaystyle \sum \limits _{i = 1}^N \frac{\mathbf {v}(\theta _i)}{r_i^2\sigma _i^2} \right\| ^2 + 4\,\alpha \left\| \displaystyle \sum \limits _{i = 1}^N \frac{\mathbf {u}(\theta _i)}{r_i\sigma _i^2} \right\| _{\mathbf {H}}^2. \end{array} \end{aligned}$$
(23)

\(\square \)
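As a numerical sanity check of Theorem 1, the sketch below evaluates \(J(\mathbf {r},{\varvec{\theta }})\) via (23) and verifies the identity (25) against a direct evaluation of \(|{\varvec{\varPhi }}|\) from (15). The helper names (`cost_J`, `fim_det`) and the angles, ranges, and \(\sigma \) are assumed values chosen only for illustration.

```python
import numpy as np

def cost_J(theta, r, sigma):
    """Cost J(r, theta) from (23)."""
    theta, r = np.asarray(theta), np.asarray(r)
    w = 1.0 / np.asarray(sigma)**2
    alpha = 1.0 / w.sum()
    v = np.column_stack([np.cos(2 * theta), np.sin(2 * theta)])   # v(theta_i)
    u = np.column_stack([np.cos(theta), np.sin(theta)])           # u(theta_i)
    up = np.column_stack([np.sin(theta), -np.cos(theta)])         # u(theta_i - pi/2)
    H = (up.T * (w / r**2)) @ up                                  # matrix H of Theorem 1
    sv = (v.T * (w / r**2)).sum(axis=1)                           # sum v(theta_i)/(r_i^2 sigma_i^2)
    su = (u.T * (w / r)).sum(axis=1)                              # sum u(theta_i)/(r_i sigma_i^2)
    return sv @ sv + 4 * alpha * su @ H @ su

def fim_det(theta, r, sigma, gamma=2.0):
    """|Phi| computed directly from (15)."""
    theta, r = np.asarray(theta), np.asarray(r)
    w = 1.0 / np.asarray(sigma)**2
    beta = 10 * gamma / np.log(10)
    z = beta / r[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
    Phi = (z.T * w) @ z - (1.0 / w.sum()) * np.outer(w @ z, w @ z)
    return np.linalg.det(Phi)

theta = np.deg2rad([10.0, 130.0, 220.0])
r, sigma, gamma = np.array([1000.0, 900.0, 800.0]), np.array([0.01] * 3), 2.0
beta = 10 * gamma / np.log(10)
lhs = fim_det(theta, r, sigma, gamma)
rhs = beta**4 / 4 * (np.sum(1 / (r**2 * sigma**2))**2 - cost_J(theta, r, sigma))
print(np.isclose(lhs, rhs))   # True: relation (25) holds
```

Because the first term of (25) does not depend on \({\varvec{\theta }}\), any routine that minimizes \(J\) also maximizes \(|{\varvec{\varPhi }}|\), which is exactly the equivalence stated in (21).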

4.2 Optimal SSG for some special scenarios

In this subsection we study the optimal sensor placement for various scenarios.

Theorem 2

In the case where the distances \(\{r_i\}_{i=1}^N\) are known and fixed, the maximum attainable value of the determinant of the FIM is \(|{\varvec{\varPhi }}|_{\max } = \frac{\beta ^4}{4}{\left( {\sum _{i = 1}^N \frac{1}{r_i^2\sigma _i^2}} \right) ^2}\), which is obtained if and only if the following two equalities hold.

$$\begin{aligned} \sum _{i = 1}^N \frac{\mathbf {v}(\theta _i)}{r_i^2\sigma _i^2}= & {} 0 , \end{aligned}$$
(24a)
$$\begin{aligned} \sum _{i = 1}^N \frac{\mathbf {u}(\theta _i)}{r_i \sigma _i^2}= & {} 0. \end{aligned}$$
(24b)

Proof

From (18) and (22) we can write

$$\begin{aligned} \left| {{\varvec{\varPhi }}} \right| = \frac{\beta ^4}{4}\left[ \left( \sum \limits _{i = 1}^N \frac{1}{r_i^2\sigma _i^2} \right) ^2 - J\left( \mathbf {r},{\varvec{\theta }}\right) \right] \leqslant \frac{\beta ^4}{4}\left( \sum \limits _{i = 1}^N \frac{1}{r_i^2\sigma _i^2} \right) ^2 \end{aligned}$$
(25)

It is easy to see that the maximum of \(|{\varvec{\varPhi }}|\) is obtained whenever the two norms in (23) are zero; hence the optimality conditions (24) follow. \(\blacksquare \)

In this paper, the geometries which satisfy these conditions are called optimal geometries. On the other hand, the geometries which cannot satisfy the conditions in (24), but for which \(|{\varvec{\varPhi }}|\) is maximized by a proper adjustment of \(\left\{ {\theta _i} \right\} _{i = 1}^N\), are called sub-optimal geometries. The following corollary is easily concluded from Theorem 2.

Corollary 1

For equal sensor-source ranges (i.e., when \({r_1} =\cdots = {r_N} = r\)) and equal noise variances \(\sigma _1 = \cdots = \sigma _N = \sigma \), the determinant of the FIM is upper bounded as

$$\begin{aligned} |{\varvec{\varPhi }}| \leqslant \frac{\beta ^4 N^2}{4 r^4\sigma ^4}. \end{aligned}$$
(26)

where the equality holds if and only if

$$\begin{aligned} \sum \limits _{i = 1}^N \mathbf {u}(\theta _i) = 0, \end{aligned}$$
(27a)
$$\begin{aligned} \sum \limits _{i = 1}^N \mathbf {v}(\theta _i) = 0, \end{aligned}$$
(27b)

and in this scenario we have

$$\begin{aligned} {\text{CRLB}} = \frac{{2{r}{\sigma }}}{{{\beta }\sqrt{N}}}. \end{aligned}$$
(28)

\(\square \)

Corollary 2

For \(N \geqslant 3\) with the equal sensor-source ranges \(r_1=\cdots = r_N = r\) and equal noise variances \(\sigma _1 = \cdots = \sigma _N = \sigma \), equiangular sensor separation is the optimal geometry.

Proof

By defining the complex number \(V \mathop = \limits ^\varDelta e^{j\theta _1} + \cdots + e^{j\theta _N}\), we can write

$$\begin{aligned} \begin{array}{l} \sum \limits _{i = 1}^N {\cos \,{\theta _i} = \left| V \right| }\cos \angle V ,\\ \sum \limits _{i = 1}^N {\sin \,{\theta _i} = \left| V \right| }\sin \angle V , \end{array} \end{aligned}$$
(29)

where \(|\cdot |\) and \(\angle \) denote the magnitude and the angle of a complex number, respectively. To satisfy condition (27a), \(\left| V \right| \) should be zero, which is obtained when the angles \({\theta _i}\) are uniformly spaced over \([ 0,2\pi )\). Similarly, to satisfy (27b), the angles \(2{\theta _i}\) should be uniformly spaced over \(\left[ {0,4\pi } \right) \). \(\square \)
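A short numerical check (with an illustrative \(N\) and \(r\), and an assumed \(\sigma =0.01\) corresponding to the \(-20\ \text{dB}\) setting of Sect. 5) confirms that uniformly spaced angles satisfy (27) and attain the bound (26) and the CRLB (28):

```python
import numpy as np

N, r, sigma, gamma = 4, 1000.0, 0.01, 2.0
beta = 10 * gamma / np.log(10)
theta = 2 * np.pi * np.arange(N) / N                 # equiangular separation 2*pi/N

# conditions (27): both complex sums vanish for uniformly spaced angles
print(np.isclose(np.sum(np.exp(1j * theta)), 0))     # sum_i u(theta_i) = 0
print(np.isclose(np.sum(np.exp(2j * theta)), 0))     # sum_i v(theta_i) = 0

# bound (26) and CRLB (28) attained by this geometry
print(beta**4 * N**2 / (4 * r**4 * sigma**4))        # about 2.28, cf. Sect. 5.1
print(2 * r * sigma / (beta * np.sqrt(N)))           # about 1.15 m
```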

Corollary 3

For \(N \geqslant 6\), when the sensor ranges and noise variances are equal, suppose the N sensors are partitioned into q groups \(\{\nu _1,\ldots ,\nu _q\}\). Then an optimal sensor geometry is obtained if each subset \(\nu _k\) satisfies the conditions in (27), regardless of the relative arrangement of the individual groups.

Proof

The conditions (27) can be written as

$$\begin{aligned} \sum \limits _{i = 1}^N {\mathbf {u}}({\theta _i}) = \sum _{k=1}^q \sum \limits _{ i \in \nu _k} {\mathbf {u}}({\theta _i}) , \end{aligned}$$
(30a)
$$\begin{aligned} \sum \limits _{i = 1}^N {\mathbf {v}}({\theta _i}) = \sum _{k=1}^q \sum \limits _{ i \in \nu _k} {\mathbf {v}}({\theta _i}) \end{aligned}$$
(30b)

Since for each subset \(\nu _k\) we have \( \sum \limits _{ i \in \nu _k} \mathbf {u}(\theta _i)=0\) and \(\sum \limits _{ i \in \nu _k} \mathbf {v}(\theta _i)=0\), the conditions (27) are satisfied. \(\square \)

Corollary 4

For \(N=3\) with different sensor-source ranges and equal noise variances, the determinant of the FIM can be written as

$$\begin{aligned} |{\varvec{\varPhi }}|=\frac{\beta ^4}{3\sigma ^4(r_1r_2r_3)^2}\left[ r_3\sin (A)-r_2\sin (B)-r_1\sin (A-B)\right] ^2 \end{aligned}$$
(31)

in which \( A=\theta _2-\theta _1\) and \(B=\theta _3-\theta _1\).

To obtain the optimal angular sensor separation, we set the partial derivatives of \(|{\varvec{\varPhi }}|\) with respect to A and B to zero; the resulting stationary conditions are either

$$\begin{aligned} r_3 \cos (A)=r_2 \cos (B)=r_1\cos (A-B) \end{aligned}$$
(32)

or

$$\begin{aligned} r_3 \sin (A)=r_2 \sin (B)+r_1 \sin (A-B) \end{aligned}$$
(33)

Since condition (33) forces \(|{\varvec{\varPhi }}|=0\), the optimal angular separation is characterized by (32). \(\square \)

For general (different) sensor-source ranges, the optimal sensor placement can be obtained numerically: using the gradient of the FIM determinant, the sensor configuration is updated so as to increase the FIM determinant until no further improvement is possible.
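A minimal sketch of such a numerical update is given below: it performs gradient ascent on \(|{\varvec{\varPhi }}|\) over the bearing angles with a finite-difference gradient, the ranges being held fixed. The step size, starting angles, and \(\sigma =0.01\) are assumptions made for illustration, not prescriptions of the paper.

```python
import numpy as np

def fim_det(theta, r, sigma, gamma=2.0):
    """|Phi| computed directly from (15)."""
    theta, r, sigma = (np.asarray(x, dtype=float) for x in (theta, r, sigma))
    beta = 10 * gamma / np.log(10)
    w = 1.0 / sigma**2
    z = beta / r[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
    Phi = (z.T * w) @ z - (1.0 / w.sum()) * np.outer(w @ z, w @ z)
    return np.linalg.det(Phi)

def ascend(theta0, r, sigma, step=0.05, iters=500, h=1e-6):
    """Gradient ascent on |Phi| over the sensor bearing angles."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        grad = np.array([(fim_det(theta + h * e, r, sigma)
                          - fim_det(theta - h * e, r, sigma)) / (2 * h)
                         for e in np.eye(len(theta))])
        theta += step * grad
    return np.mod(theta, 2 * np.pi)

r, sigma = [1000.0, 1000.0, 800.0], [0.01] * 3
theta_opt = ascend(np.deg2rad([0.0, 90.0, 200.0]), r, sigma)
print(np.rad2deg(theta_opt), fim_det(theta_opt, r, sigma))   # |Phi| climbs towards about 1.75
```

Since \(|{\varvec{\varPhi }}|\) depends only on the angle differences, the ascent converges to a rotated or mirrored version of the (sub-)optimal geometry.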

5 Computer simulations

In this section, first, we present simulation results for two scenarios with equal sensor ranges and different sensor ranges. Then, we compare the performance of the proposed optimal sensor placement for \(N=3, 6\) with other placements. Furthermore, we present optimal placements based on A-optimality and D-optimality. Finally, we study the effect of the sensor ranges on the optimal sensor placement. In our simulations we assume \(\gamma =2\) and \(\sigma _1=\sigma _2=\cdots =\sigma _N=-20\ \text{dB}\), and we take the sensor with angle \(\theta _1=0^\circ \) as the reference.

5.1 Equal sensor ranges

At first, we study optimal sensor geometries for \(N=3, 4\) with equal sensor ranges \((r_i=1000\,\text{m},\ i=1, 2, 3, 4)\). Figure 2 presents the determinant of the FIM and the corresponding contour plot versus \(\theta _2\) and \(\theta _3\) for \(N =3\) with equal sensor ranges. As Fig. 2b shows, the maximum value of \(|{\varvec{\varPhi }}|\) is attained when \({\theta _2} = {120^\circ }\) and \({\theta _3} = {240^\circ }\), which validates Corollary 1. Since the maximum value of \(|{\varvec{\varPhi }}|\) is 1.2807, equal to the upper bound of Corollary 1, this solution is an optimal geometry.

Figure 3 shows \(|{\varvec{\varPhi }}|\) as a function of \(\theta _3\) and \(\theta _4\) for \(N=4\), \(\theta _1=0^\circ \), \(\theta _2=90^\circ \), with equal sensor ranges.

Similar to the above example, equiangular sensor separation (i.e., \({\theta _3} = {180^\circ }\), \({\theta _4} = {270^\circ }\)) gives the optimal sensor geometry, where \(|{\varvec{\varPhi }}|=2.2768\). The two optimal geometries for \(N=3,\ 4\) are illustrated in Fig. 4.

Fig. 2

a Determinant of FIM as a function of \(\theta _2\) and \(\theta _3\) for \(N=3\) with the equal sensor ranges; b contour plot of the determinant of FIM

Fig. 3

a Determinant of FIM as a function of \(\theta _3\) and \(\theta _4\) for \(N=4\), \(\theta _2=90^{\circ }\) with equal sensor ranges; b the contour plot of the determinant of FIM

Fig. 4

a Optimal SSG for \(N=3\); b optimal SSG for \(N=4\)

5.2 Different sensor ranges

Here, we study the SSG when the sensors are located at different distances from the source. At first, we consider \(N = 3\) and \(\left( {{r_1},\,{r_2},\,{r_3}} \right) = (1000\,\text{m}, 1000\,\text{m}, 800\,\text{m})\). Figure 5 shows the determinant of the FIM and the corresponding contour plot versus \(\theta _2\) and \(\theta _3\). As Fig. 5b shows, the maximum value of \(|{\varvec{\varPhi }}|\) is 1.7495, which is attained whenever \(\left( {{\theta _2},\,{\theta _3}} \right) \in \left\{ {\left( {{{125}^\circ },\,\,{{242}^\circ }} \right) ,\,\left( {{{235}^\circ },\,\,{{118}^\circ }} \right) } \right\} \). Since this maximum does not reach the upper bound of Theorem 2 (i.e., 1.806), these solutions are sub-optimal geometries. The two distinct sub-optimal geometries are illustrated in Fig. 6.
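These sub-optimal angles can be reproduced with a coarse grid search over the closed-form determinant (31), which applies here since the noise variances are equal; \(\sigma =0.01\) is again the assumed reading of the \(-20\ \text{dB}\) setting.

```python
import numpy as np

gamma, sigma = 2.0, 0.01                       # assumed -20 dB noise, as in Sect. 5
beta = 10 * gamma / np.log(10)
r1, r2, r3 = 1000.0, 1000.0, 800.0

def det_phi(A, B):
    """Closed-form |Phi| from (31); A = theta2 - theta1, B = theta3 - theta1."""
    s = r3 * np.sin(A) - r2 * np.sin(B) - r1 * np.sin(A - B)
    return beta**4 / (3 * sigma**4 * (r1 * r2 * r3)**2) * s**2

A, B = np.meshgrid(np.deg2rad(np.arange(0, 360)), np.deg2rad(np.arange(0, 360)),
                   indexing="ij")
D = det_phi(A, B)
i, j = np.unravel_index(np.argmax(D), D.shape)
print(i, j, D[i, j])     # near (125, 242) or the mirrored (235, 118), |Phi| about 1.75
```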

The determinant of FIM and corresponding contour plot versus \(\theta _3\) and \(\theta _4\) for \(N=4\) are illustrated in Fig. 7.

The bearing angle pairs \(\left( {{\theta _3},{\theta _4}} \right) \) which maximize the determinant of the FIM are given by \(\left( {{\theta _3},{\theta _4}} \right) \in \left\{ {\left( {{{178}^\circ },\,{{273}^\circ }} \right) ,\left( {{{280}^\circ },\,{{185}^\circ }} \right) } \right\} \). Here, the maximum value of \(|{\varvec{\varPhi }}|\) does not reach the upper bound of Theorem 2 (i.e., 4.8496), so these are sub-optimal geometries; the two distinct sub-optimal configurations are illustrated in Fig. 8.

Fig. 5

a Determinant of FIM as a function of \(\theta _2\) and \(\theta _3\) for \(N=3\) with \((r_1, r_2, r_3) = (1000\text{m}, 1000\text{m}, 800\text{m})\); b contour plot of the determinant of FIM

Fig. 6

Two sub-optimal SSG for \(N = 3\) and \((r_1,r_2,r_3)=(1000,1000,800)\text {m}\)

Fig. 7

a The determinant of FIM as a function of \(\theta _3\) and \(\theta _4\) for \(N=4\), \(\theta _2=90^{\circ }\) and \((r_1,r_2,r_3,r_4) = (1000 \text{m}, 900\text{m}, 800\text{m}, 700\text{m})\); b The contour plot of the determinant of FIM

Fig. 8

Two sub-optimal SSG for \(N =4\) and \((r_1,r_2,r_3,r_4)=(1000,900,800,700)\text {m}\)

Fig. 9

a The contour plot of the determinant of FIM for \(N=3\) with \(\theta _1=0^{\circ }, \theta _2=120^{\circ }, \theta _3=240^{\circ }\), \(\text{AV}_{{|{\varvec{\varPhi }}|}}=1595\) and \({|{\varvec{\varPhi }}|_{(0,0)}}=1.2867\); b the contour plot of the determinant of FIM for \(N=3\) with \(\theta _1=0^{\circ }, \theta _2=60^{\circ }, \theta _3=120^{\circ }\), \(\text{AV}_{{|{\varvec{\varPhi }}|}}=1576\) and \({|{\varvec{\varPhi }}|_{(0,0)}}=1.0943\)

5.3 Comparison of different sensor placements

In this part, we consider a 1000 \(\text{m}\) \(\times \) 1000 \(\text{m}\) region where the sensors are located at stationary positions on a circle of radius 1000 \(\text{m}\) around the origin \([0, 0]^T\), and we plot the FIM determinant over this region. To study the performance of different sensor placements, we compute two criteria: the average \({|{\varvec{\varPhi }}|}\) over the region, denoted \(\text{AV}_{{|{\varvec{\varPhi }}|}}\), and the \({|{\varvec{\varPhi }}|}\) at the origin, denoted \({|{\varvec{\varPhi }}|_{(0,0)}}\). A higher \({|{\varvec{\varPhi }}|_{(0,0)}}\) or \(\text{AV}_{{|{\varvec{\varPhi }}|}}\) indicates a better performance.

Figure 9a shows \(|{\varvec{\varPhi }}|\) for \(N=3\) with equiangular sensor separation \(120^\circ \). Figure 9b shows \(|{\varvec{\varPhi }}|\) for \(\theta _1=0^\circ , \theta _2=60^\circ ,\theta _3=120^\circ \). Comparing the two placements, the equiangular sensor separation gives a better performance due to its higher \(\text{AV}_{{|{\varvec{\varPhi }}|}}\) and higher \({|{\varvec{\varPhi }}|_{(0,0)}}\), which validates Corollary 1.

Figure 10 shows \(|{\varvec{\varPhi }}|\) for two placements with \(N=6\) sensors. As Fig. 10 illustrates, the first placement with equiangular sensor separation \(60^\circ \) gives a better performance on average due to its higher \(\text{AV}_{{|{\varvec{\varPhi }}|}}\). On the other hand, when the source is located at the origin, the two placements give equal performance due to the equal \({|{\varvec{\varPhi }}|_{(0,0)}}\), which validates Corollary 3, because in the second placement the sensors are partitioned into two sets \(\{s_1, s_2, s_3\}\) and \(\{s_4, s_5, s_6\}\), each with equiangular sensor separation \(120^\circ \).
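The two metrics used in this comparison can be computed with a short script such as the one below. The grid resolution, the averaging convention, \(\sigma =0.01\), and the helper names are assumptions made for illustration, so the absolute \(\text{AV}_{{|{\varvec{\varPhi }}|}}\) values need not match those quoted in Figs. 9 and 10.

```python
import numpy as np

gamma, sigma = 2.0, 0.01
beta = 10 * gamma / np.log(10)

def det_phi(p, sensors):
    """|Phi| from (15) for a source at p and fixed sensor positions."""
    d = p - sensors                                  # vectors from each sensor to the source
    r = np.linalg.norm(d, axis=1)
    z = beta * d / r[:, None]**2                     # z_i = (beta / r_i) u(theta_i)
    w = np.full(len(sensors), 1.0 / sigma**2)
    Phi = (z.T * w) @ z - (1.0 / w.sum()) * np.outer(w @ z, w @ z)
    return np.linalg.det(Phi)

def placement_metrics(angles_deg, radius=1000.0, half_width=500.0, step=25.0):
    """Average |Phi| over the square region and |Phi| at the origin."""
    ang = np.deg2rad(angles_deg)
    sensors = radius * np.column_stack([np.cos(ang), np.sin(ang)])
    grid = np.arange(-half_width, half_width + step, step)
    dets = [det_phi(np.array([x, y]), sensors) for x in grid for y in grid]
    return np.mean(dets), det_phi(np.zeros(2), sensors)

print(placement_metrics([0, 120, 240]))                # equiangular N = 3
print(placement_metrics([0, 60, 120]))                 # clustered placement of Fig. 9b
print(placement_metrics([0, 60, 120, 180, 240, 300]))  # equiangular N = 6
print(placement_metrics([0, 120, 240, 90, 210, 330]))  # two equiangular groups, Fig. 10b
```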

Fig. 10

a The contour plot of the determinant of FIM for \(N=6\) with equiangular sensor separation \({60}^\circ \), \(\text{AV}_{{|{\varvec{\varPhi }}|}}=1895\) and \({|{\varvec{\varPhi }}|_{(0,0)}}=5.1433\); b the contour plot of the determinant of FIM for \(\{\theta _1=0^\circ , \theta _2=120^\circ , \theta _3=240^\circ \}, \{\theta _4=90^\circ ,\theta _5=210^\circ ,\theta _6=330^\circ \}\), \(\text{AV}_{{|{\varvec{\varPhi }}|}}=1819\) and \({|{\varvec{\varPhi }}|_{(0,0)}}=5.1433\)

5.4 Optimal placement based on D-optimality and A-optimality

Here we consider two criteria for optimal sensor placement: maximization of the determinant of the FIM (D-optimality) and minimization of the trace of the inverse of the FIM (A-optimality). The D-optimality criterion minimizes the volume of the uncertainty ellipsoid of the source estimate [27]. The A-optimality criterion, which consists in minimizing the trace of the CRB matrix, suppresses the average variance of the estimation error; this is the criterion introduced in Eq. (17). Now, we consider three sensors with the source located at the origin \([0, 0]^T\). Figure 11(a) shows the CRLB as a function of \(\theta _{21}\) and \(\theta _{31}\). The optimal angles which minimize the CRLB are either \((\theta _{21}, \theta _{31})=(128^\circ , 244^\circ )\) or \((\theta _{21}, \theta _{31})=(230^\circ , 116^\circ )\). In Fig. 11(b), the plot of \(|{\varvec{\varPhi }}|\) is provided. The optimal angles that maximize \(|{\varvec{\varPhi }}|\) are either \((\theta _{21}, \theta _{31})=(125^\circ , 243^\circ )\) or \((\theta _{21}, \theta _{31})=(235^\circ , 118^\circ )\). We observe that D-optimality and A-optimality yield different sensor placements, whereas for the AOA localization geometry it has been shown that A-optimality is equivalent to D-optimality [27].
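The following sketch checks this difference numerically by a grid search for the same \(N=3\) configuration, locating the D-optimal angles (maximum \(|{\varvec{\varPhi }}|\)) and the A-optimal angles (minimum \(\text{trace}({\varvec{\varPhi }}^{-1})\)); \(\sigma =0.01\) and the one-degree grid are assumed values.

```python
import numpy as np

gamma, sigma = 2.0, 0.01
beta = 10 * gamma / np.log(10)
r = np.array([1000.0, 1000.0, 800.0])
w = np.full(3, 1.0 / sigma**2)

def fim(theta):
    """2x2 FIM from (15) for bearing angles theta (theta_1 = 0 as reference)."""
    z = beta / r[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
    return (z.T * w) @ z - (1.0 / w.sum()) * np.outer(w @ z, w @ z)

grid = np.deg2rad(np.arange(1.0, 360.0))
best_d, best_a = None, None
for t2 in grid:
    for t3 in grid:
        F = fim(np.array([0.0, t2, t3]))
        d = np.linalg.det(F)
        if d <= 1e-12:
            continue                                 # skip (near-)singular geometries
        a = np.trace(F) / d                          # trace(F^{-1}) for a 2x2 matrix
        if best_d is None or d > best_d[0]:
            best_d = (d, t2, t3)
        if best_a is None or a < best_a[0]:
            best_a = (a, t2, t3)

print("D-optimal:", np.rad2deg(best_d[1:]))          # near (125, 243) or its mirror image
print("A-optimal:", np.rad2deg(best_a[1:]))          # near (128, 244) or its mirror image
```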

Fig. 11

a Contour plot of CRLB with \((r_1,r_2,r_3)=(1000,1000,800)\) m; b contour plot of determinant of FIM

5.5 Effect of sensor ranges on the performance of optimal placement

We study the effect of the sensor ranges on the performance of the optimal geometry using the determinant of the FIM. As discussed earlier, a higher \(|{\varvec{\varPhi }}|\) gives a better performance because it leads to a smaller uncertainty ellipsoid. We consider optimal geometries for \(N=3,4,5\), where the source is located at the center of the sensors, and plot the determinant of the FIM versus the sensor range in Fig. 12. As can be seen, \(|{\varvec{\varPhi }}|\) decreases as the sensor range increases. This means that the uncertainty area of the source location estimate grows as the sensor range increases.
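This trend follows directly from the bound (26): for the optimal (equiangular) geometries considered here, \(|{\varvec{\varPhi }}|_{\max }\) decays as \(r^{-4}\). A few lines (with an assumed \(\sigma =0.01\)) tabulate it:

```python
import numpy as np

gamma, sigma = 2.0, 0.01
beta = 10 * gamma / np.log(10)
ranges = np.arange(200.0, 2001.0, 200.0)                     # sensor range r in meters
for N in (3, 4, 5):
    det_max = beta**4 * N**2 / (4 * ranges**4 * sigma**4)    # bound (26)
    print(N, det_max)                                        # |Phi|_max falls off as 1/r^4
```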

Fig. 12

Determinant of FIM versus sensor ranges

6 Conclusion

In this paper, we characterized the optimal sensor placement for RSSD-based source localization through maximization of the determinant of the FIM. We derived necessary and sufficient conditions for the optimal sensor placement in different scenarios. The results of this paper can be used to place the sensors around the source so as to obtain the best localization performance. Simulation results showed that the optimal sensor placement performs well compared with other placements. Future work can focus on extending the results of this paper to optimal trajectory control for moving RSSD sensor platforms and to optimal sensor geometries for heterogeneous sensor networks.